From 434c9663110ae55634d616318bd2392ffd365d58 Mon Sep 17 00:00:00 2001
From: <>
Date: Tue, 4 Jun 2024 08:26:14 +0000
Subject: [PATCH] Deployed c7e6e4a with MkDocs version: 1.6.0

---
 .../debugging_failed_builds/index.html | 66 ++++++++++++++-----
 adding_software/opening_pr/index.html  | 39 ++++++++++-
 search/search_index.json               |  2 +-
 3 files changed, 89 insertions(+), 18 deletions(-)

diff --git a/adding_software/debugging_failed_builds/index.html b/adding_software/debugging_failed_builds/index.html
index ca68f857c..29f0f492d 100644
--- a/adding_software/debugging_failed_builds/index.html
+++ b/adding_software/debugging_failed_builds/index.html
@@ -1346,6 +1346,15 @@
While this might be faster than the easystack-based approach, this is not how the bot builds. So while it may reproduce the failure the bot encounters, it may also not reproduce the bug at all (no failure) or run into different bugs. If you want to be sure, use the easystack-based approach.
+Rebuilding software requires an additional step at the beginning: the software first needs to be removed. We assume you've already checked out the feature branch. Then, you need to start the container with the additional --fakeroot
argument, otherwise you will not be able to remove files from the /cvmfs
prefix. Make sure to also include the --save
argument, as we will need the tarball later on. E.g.
+
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --fakeroot
+
Then, inside the container, run the EESSI-remove-software.sh
script.
+
+This should remove any software specified in a rebuild easystack that got added in your current feature branch.
+Now, exit the container, paying attention to the instructions that are printed to resume later, e.g.:
+Saved contents of tmp directory '/tmp/eessi.WZxeFUemH2' to tarball '/home/myuser/pr507/EESSI-1711538681.tgz' (to resume session add '--resume /home/myuser/pr507/EESSI-1711538681.tgz')
+
Now, continue with the original instructions to start the container (i.e. either here or with this alternate approach) and make sure to add the --resume
flag. This way, you are resuming from the tarball (i.e. with the software removed that has to be rebuilt), but in a new container in which you have regular (i.e. no root) permissions.
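For example, a resume command could look like the following (a sketch that reuses the paths from the examples above; adjust them to your own setup):
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --resume /home/myuser/pr507/EESSI-1711538681.tgz
Note that --fakeroot is deliberately omitted here, since the rebuild itself should be done with regular (non-root) permissions.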
If you are still in the prefix layer (i.e. after previously building something), exit it first: -
$ exit
-logout
-Leaving Gentoo Prefix with exit status 0
+
Then, source the EESSI init script (again):
-Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash
-Environment set up to use EESSI (2023.06), have fun!
-{EESSI 2023.06} Apptainer>
+Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash
+Environment set up to use EESSI (2023.06), have fun!
+{EESSI 2023.06} Apptainer>
Note
If you are in a SLURM environment, make sure to run for i in $(env | grep SLURM); do unset "${i%=*}"; done
to unset any SLURM environment variables. Failing to do so will cause mpirun
to pick up on these and e.g. infer how many slots are available. If you run into errors of the form "There are not enough slots available in the system to satisfy the X slots that were requested by the application:", you probably forgot this step.
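As a quick sanity check (not part of the original instructions), you can verify that no SLURM variables are left in your environment:
env | grep SLURM
This should print nothing if all SLURM environment variables were unset.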
Then, execute the run_tests.sh
script. We are assuming you are still in the root of the software-layer
repository that you cloned earlier:
-
./run_tests.sh
+
+If all goes well, you should see (part of) the EESSI test suite being run by ReFrame, finishing with something like
-[ PASSED ] Ran X/Y test case(s) from Z check(s) (0 failure(s), 0 skipped, 0 aborted)
+
Note
@@ -2226,23 +2262,23 @@ Known causes of issues in EESSI
The custom system prefix of the compatibility layer¶
Some installations might expect the system root (sysroot, for short) to be in /
. However, in case of EESSI, we are building against the OS in the compatibility layer. Thus, our sysroot is something like ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}
. This can cause issues if installation procedures assume the sysroot is in /
.
One example of a sysroot issue was in installing wget
. The EasyConfig for wget
defined
-
# make sure pkg-config picks up system packages (OpenSSL & co)
-preconfigopts = "export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && "
-configopts = '--with-ssl=openssl '
+# make sure pkg-config picks up system packages (OpenSSL & co)
+preconfigopts = "export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && "
+configopts = '--with-ssl=openssl '
This will not work in EESSI, since OpenSSL should be picked up from the compatibility layer. This was fixed by changing the EasyConfig to read
-preconfigopts = "export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && "
-configopts = '--with-ssl=openssl '
+preconfigopts = "export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && "
+configopts = '--with-ssl=openssl '
The %(sysroot)s
is a template value which EasyBuild will resolve to the value that has been configured in EasyBuild for sysroot
(it is one of the fields printed by eb --show-config
if a non-standard sysroot is configured).
If you encounter issues where the installation can not find something that is normally provided by the OS (i.e. not one of the dependencies in your module environment), you may need to resort to a similar approach.
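For example, to check which sysroot EasyBuild is configured with, you can use the eb --show-config option mentioned above:
eb --show-config | grep sysroot
If this prints nothing, no non-standard sysroot has been configured.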
The writeable overlay¶
The writeable overlay in the container is known to be a bit slow sometimes. Thus, we have seen tests failing because they exceed some timeout (e.g. this issue).
To investigate if the writeable overlay is somehow the issue, you can make sure the installation gets done somewhere else, e.g. in the temporary directory in /tmp
that you created as workdir. To do this, set
-export EASYBUILD_INSTALLPATH=${WORKDIR}
+
after the step in which you have sourced the configure_easybuild
script. Note that in order to find (with module av
) any modules that get installed here, you will need to add this path to the MODULEPATH
:
-module use ${EASYBUILD_INSTALLPATH}/modules/all
+
Then, retry building the software (as described above). If the build now succeeds, you know that indeed the writeable overlay caused the issue. We have to build in this writeable overlay when we do real deployments. Thus, if you hit such a timeout, try to see if you can (temporarily) modify the timeout value in the test so that it passes.
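Putting the two settings from this section together, a minimal sketch (assuming ${WORKDIR} points to the temporary working directory mentioned above) looks like:
export EASYBUILD_INSTALLPATH=${WORKDIR}
module use ${EASYBUILD_INSTALLPATH}/modules/all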
diff --git a/adding_software/opening_pr/index.html b/adding_software/opening_pr/index.html
index f8b1df75a..e32c99b4f 100644
--- a/adding_software/opening_pr/index.html
+++ b/adding_software/opening_pr/index.html
@@ -1214,6 +1214,15 @@
+ Rebuilding software
@@ -1683,6 +1692,15 @@
+ Rebuilding software
@@ -1755,8 +1773,9 @@ Creating a pull request
echo ' - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml
+echo ' - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml
+Note that the naming scheme is standardized and should be eessi-<eessi_version>-eb-<eb_version>-<toolchain_version>.yml
. See the official EasyBuild documentation on easystack files for more information on the syntax.
4) Stage and commit the changes into your branch with a sensible message
git add easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml
git commit -m "{2023.06}[GCC/12.3.0] example 1.2.3"
@@ -1770,6 +1789,22 @@ Creating a pull request
should almost instantly create a comment in your pull request
with an overview of how it is configured - you will need this information when providing build instructions.
+Rebuilding software¶
+
We typically do not rebuild software, since (strictly speaking) this breaks reproducibility for anyone using the software. However, there are certain situations in which it is difficult or impossible to avoid.
+To do a rebuild, you add the software you want to rebuild to a dedicated easystack file in the rebuilds
directory. Use the following naming convention: YYYYMMDD-eb-<EB_VERSION>-<APPLICATION_NAME>-<APPLICATION_VERSION>-<SHORT_DESCRIPTION>.yml
, where YYYYMMDD
is the opening date of your PR. E.g. 2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml
was added in a PR on the 6th of May 2024 and used to rebuild CUDA-12.1.1 using EasyBuild 4.9.1 to resolve an issue with some runtime libraries missing from the initial CUDA 12.1.1 installation.
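For instance, the easystack file for the CUDA rebuild mentioned above could be created with something like the following (a sketch; the exact location of the rebuilds directory under the easystacks tree is an assumption based on the paths used elsewhere on this page):
mkdir -p easystacks/software.eessi.io/2023.06/rebuilds
touch easystacks/software.eessi.io/2023.06/rebuilds/2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml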
+At the top of your easystack file, please use comments to include a short description, and make sure to include any relevant links to related issues (e.g. from the GitHub repositories of EESSI, EasyBuild, or the software you are rebuilding).
+As an example, consider the full easystack file (2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml
) used for the aforementioned CUDA rebuild:
+# 2024.05.06
+# Original matching of files we could ship was not done correctly. We were
+# matching the basename for files (e.g., libcudart.so from libcudart.so.12)
+# rather than the name stub (libcudart)
+# See https://github.com/EESSI/software-layer/pull/559
easyconfigs:
  - CUDA-12.1.1.eb:
      options:
        accept-eula-for: CUDA
+
+By separating rebuilds in dedicated files, we still maintain a complete software bill of materials: it is transparent what got rebuilt, for which reason, and when.
@@ -1790,7 +1825,7 @@ Creating a pull request
- December 6, 2023
+ June 3, 2024
diff --git a/search/search_index.json b/search/search_index.json
index 525a265f1..5c4ccc3cb 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to the EESSI project documentation!","text":"Quote
What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC cluster or cloud instance you use or maintain, without compromising on performance?
The European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is a collaboration between different European partners in the HPC community. The goal of this project is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure.
"},{"location":"#quick-links","title":"Quick links","text":" - What is EESSI?
- Contact info
For users:
software.eessi.io
repository - Access, initialize and use EESSI
- Overview of software
- How to run EESSI test suite
- Get help or report issue
For system administrators:
- EESSI layered structure: filesystem, compatibility, software
- Installing EESSI
- Setting up a mirror server
For contributors:
- Adding software to EESSI
- Meetings
The EESSI project was covered during a quick AWS HPC Tech Short video (15 June 2023):
"},{"location":"bot/","title":"Build-test-deploy bot","text":"Building, testing, and deploying software is done by one or more bot instances.
The EESSI build-test-deploy bot is implemented as a GitHub App in the eessi-bot-software-layer
repository.
It operates in the context of pull requests to the compatibility-layer
repository or the software-layer
repository, and follows the instructions supplied by humans, so the procedure of adding software to EESSI is semi-automatic.
It leverages the scripts provided in the bot/
subdirectory of the target repository (see for example here), like bot/build.sh
to build software, and bot/check-result.sh
to check whether the software was built correctly.
"},{"location":"bot/#high-level-design","title":"High-level design","text":"The bot consists of two components: the event handler, and the job manager.
"},{"location":"bot/#event-handler","title":"Event handler","text":"The bot event handler is responsible for handling GitHub events for the GitHub repositories it is registered to.
It is triggered for every event that it receives from GitHub. Most events are ignored, but specific events trigger the bot to take action.
Examples of actionable events are submitting of a comment that starts with bot:
, which may specify an instruction for the bot like building software, or adding a bot:deploy
label (see deploying).
"},{"location":"bot/#job-manager","title":"Job manager","text":"The bot job manager is responsible for monitoring the queued and running jobs, and reporting back when jobs completed.
It runs every couple of minutes as a cron job.
"},{"location":"bot/#basics","title":"Basics","text":"Instructions for the bot should always start with bot:
.
To get help from the bot, post a comment with bot: help
.
To make the bot report how it is configured, post a comment with bot: show_config
.
"},{"location":"bot/#permissions","title":"Permissions","text":"The bot is configured to only act on instructions issued by specific GitHub accounts.
There are separate configuration options for who is allowed to send instructions to the bot, to trigger building of software, and to deploy software installations into the EESSI repository.
Note
Ask for help in the #software-layer-bot
channel of the EESSI Slack if needed!
"},{"location":"bot/#building","title":"Building","text":"To instruct the bot to build software, one or more build
instructions should be issued by posting a comment in the pull request (see also here).
The most basic build instruction that can be sent to the bot is:
bot: build\n
Warning
Only use bot: build
if you are confident that it is OK to do so.
Most likely, you want to supply one or more filters to avoid having the bot build for all of its configurations.
"},{"location":"bot/#filters","title":"Filters","text":"Build instructions can include filters that are applied by each bot instance to determine which builds should be executed, based on:
instance
: the name
of the bot instance, for example instance:aws
for the bot instance running in AWS; repository
: the target repository, for example eessi-2023.06-software
which corresponds to the 2023.06 version of the EESSI software layer; architecture
: the name of the CPU microarchitecture, for example x86_64/amd/zen2
;
Note
Use :
as separator to specify a value for a particular filter, do not add spaces after the :
.
The bot recognizes shorthands for the supported filters, so you can use inst:...
instead of instance:...
, repo:...
instead of repository:...
, and arch:...
instead of architecture:...
.
"},{"location":"bot/#combining-filters","title":"Combining filters","text":"You can combine multiple filters in a single build
instruction. Separate filters with a space, order of filters does not matter.
For example:
bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen2\n
"},{"location":"bot/#multiple-build-instructions","title":"Multiple build instructions","text":"You can issue multiple build instructions in a single comment, even across multiple bot instances, repositories, and CPU targets. Specify one build instruction per line.
For example:
bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen3 inst:aws\nbot: build repo:eessi-hpc.org-2023.06-software arch:aarch64/generic inst:azure\n
Note
The bot applies the filters with partial matching, which you can use to combine multiple build instructions into a single one.
For example, if you only want to build for all aarch64
CPU targets, you can use arch:aarch64
as filter.
The same applies to the instance
and repository
filters.
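For example, to build for all aarch64 CPU targets (as described above), a single instruction suffices:
bot: build arch:aarch64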
"},{"location":"bot/#behind-the-scenes","title":"Behind-the-scenes","text":""},{"location":"bot/#processing-build-instructions","title":"Processing build instructions","text":"When the bot receives build instructions through a comment in a pull request, they are processed by the event handler component. It will:
1) Combine its active configuration (instance name, repositories, supported CPU targets) and the build instructions to prepare a list of jobs to submit;
2) Create a working directory for each job, including a Slurm job script that runs the bot/build.sh
script in the context of the changes proposed in the pull request to build the software, and runs bot/check-result.sh
script at the end to check whether the build was successful;
3) Submit each prepared job to a workernode that can build for the specified CPU target, and put a hold on it.
"},{"location":"bot/#managing-build-jobs","title":"Managing build jobs","text":"During the next iteration of the job manager, the submitted jobs are released and queued for execution.
The job manager also monitors the running jobs at regular intervals, and reports back in the pull request when a job has completed. It also reports the result (SUCCESS
or FAILURE
), based on the result of the bot/check-result.sh
script.
"},{"location":"bot/#artefacts","title":"Artefacts","text":"If all goes well, each job should produce a tarball as an artefact, which contains the software installations and the corresponding environment module files.
The message reported by the job manager provides an overview of the contents of the artefact, which was created by the bot/check-result.sh
script.
"},{"location":"bot/#testing","title":"Testing","text":"Warning
The test phase is not implemented yet in the bot.
We intend to use the EESSI test suite in different OS configurations to verify that the software that was built works as expected.
"},{"location":"bot/#deploying","title":"Deploying","text":"To deploy the artefacts that were obtained in the build phase, you should add the bot: deploy
label to the pull request.
This will trigger the event handler to upload the artefacts for ingestion into the EESSI repository.
"},{"location":"bot/#behind-the-scenes_1","title":"Behind-the-scenes","text":"The current setup for the software-layer repository, is as follows:
- The bot deploys the artefacts (tarballs) to an S3 bucket in AWS, along with a metadata file, using the
eessi-upload-to-staging
script; - A cron job that runs every couple of minutes on the CernVM-FS Stratum-0 server opens a pull request to the (private) EESSI/staging repository, to move the metadata file for each uploaded tarball from the
staged
to the approved
directory; - Once that pull request gets merged, the target is automatically ingested into the EESSI repository by a cron job on the Stratum-0 server, and the metadata file is moved from
approved
to ingested
in the EESSI/staging
repository;
"},{"location":"compatibility_layer/","title":"Compatibility layer","text":"The middle layer of the EESSI project is the compatibility layer, which ensures that our scientific software stack is compatible with different client operating systems (different Linux distributions, macOS and even Windows via WSL).
For this we rely on Gentoo Prefix, by installing a limited set of Gentoo Linux packages in a non-standard location (a \"prefix\"), using Gentoo's package manager Portage.
The compatible layer is maintained via our https://github.com/EESSI/compatibility-layer GitHub repository.
"},{"location":"contact/","title":"Contact info","text":"For more information:
- Visit our website
- Consult our documentation
- Ask for help at our support portal
- Join our Slack channel
- Reach out to one of the project partners
- Check out our GitHub repositories
- Follow us on Twitter
"},{"location":"filesystem_layer/","title":"Filesystem layer","text":""},{"location":"filesystem_layer/#cernvm-file-system-cernvm-fs","title":"CernVM File System (CernVM-FS)","text":"The bottom layer of the EESSI project is the filesystem layer, which is responsible for distributing the software stack.
For this we rely on CernVM-FS (or CVMFS for short), a network file system used to distribute the software to the clients in a fast, reliable and scalable way.
CVMFS was created over 10 years ago specifically for the purpose of globally distributing a large software stack. For the experiments at the Large Hadron Collider, it hosts several hundred million files and directories that are distributed to the order of hundred thousand client computers.
The hierarchical structure with multiple caching layers (Stratum-0, Stratum-1's located at partner sites and local caching proxies) ensures good performance with limited resources. Redundancy is provided by using multiple Stratum-1's at various sites. Since CVMFS is based on the HTTP protocol, the ubiquitous Squid caching proxy can be leveraged to reduce server loads and improve performance at large installations (such as HPC clusters). Clients can easily mount the file system (read-only) via a FUSE (Filesystem in Userspace) module.
For a (basic) introduction to CernVM-FS, see this presentation.
Detailed information about how we configure CVMFS is available at https://github.com/EESSI/filesystem-layer.
"},{"location":"filesystem_layer/#eessi-infrastructure","title":"EESSI infrastructure","text":"For both the pilot and production repositories, EESSI hosts a CernVM-FS Stratum 0 and a number of public Stratum 1 servers. Client systems using EESSI by default connect against the public EESSI CernVM-FS Stratum 1 servers. The status of the infrastructure for the pilot repository is displayed at http://status.eessi-infra.org, while for the production repository it is displayed at https://status.eessi.io.
"},{"location":"gpu/","title":"GPU support","text":"More information on the actions that must be performed to ensure that GPU software included in EESSI can use the GPU in your system is available below.
Please open a support issue if you need help or have questions regarding GPU support.
Make sure the ${EESSI_VERSION}
version placeholder is defined!
In this page, we use ${EESSI_VERSION}
as a placeholder for the version of the EESSI repository, for example:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}\n
Before inspecting paths, or executing any of the specified commands, you should define $EESSI_VERSION
first, for example with:
export EESSI_VERSION=2023.06\n
"},{"location":"gpu/#nvidia","title":"Support for using NVIDIA GPUs","text":"EESSI supports running CUDA-enabled software. All CUDA-enabled modules are marked with the (gpu)
feature, which is visible in the output produced by module avail
.
"},{"location":"gpu/#nvidia_drivers","title":"NVIDIA GPU drivers","text":"For CUDA-enabled software to run, it needs to be able to find the NVIDIA GPU drivers of the host system. The challenge here is that the NVIDIA GPU drivers are not always in a standard system location, and that we can not install the GPU drivers in EESSI (since they are too closely tied to the client OS and GPU hardware).
"},{"location":"gpu/#cuda_sdk","title":"Compiling CUDA software","text":"An additional requirement is necessary if you want to be able to compile CUDA-enabled software using a CUDA installation included in EESSI. This requires a full CUDA SDK, but the CUDA SDK End User License Agreement (EULA) does not allow for full redistribution. In EESSI, we are (currently) only allowed to redistribute the files needed to run CUDA software.
Full CUDA SDK only needed to compile CUDA software
Without a full CUDA SDK on the host system, you will still be able to run CUDA-enabled software from the EESSI stack, you just won't be able to compile additional CUDA software.
Below, we describe how to make sure that the EESSI software stack can find your NVIDIA GPU drivers and (optionally) full installations of the CUDA SDK.
"},{"location":"gpu/#host_injections","title":"host_injections
variant symlink","text":"In the EESSI repository, a special directory has been prepared where system administrators can install files that can be picked up by software installations included in EESSI. This gives the ability to administrators to influence the behaviour (and capabilities) of the EESSI software stack.
This special directory is located in /cvmfs/software.eessi.io/host_injections
, and it is a CernVM-FS Variant Symlink: a symbolic link for which the target can be controlled by the CernVM-FS client configuration (for more info, see 'Variant Symlinks' in the official CernVM-FS documentation).
Default target for host_injections
variant symlink
Unless otherwise configured in the CernVM-FS client configuration for the EESSI repository, the host_injections
symlink points to /opt/eessi
on the client system:
$ ls -l /cvmfs/software.eessi.io/host_injections\nlrwxrwxrwx 1 cvmfs cvmfs 10 Oct 3 13:51 /cvmfs/software.eessi.io/host_injections -> /opt/eessi\n
As an example, let's imagine that we want to use a architecture-specific location on a shared filesystem as the target for the symlink. This has the advantage that one can make changes under host_injections
that affect all nodes which share that CernVM-FS configuration. Configuring this in your CernVM-FS configuration would mean adding the following line in the client configuration file:
EESSI_HOST_INJECTIONS=/shared_fs/path\n
Don't forget to reload the CernVM-FS configuration
After making a change to a CernVM-FS configuration file, you also need to reload the configuration:
sudo cvmfs_config reload\n
All CUDA-enabled software in EESSI expects the CUDA drivers to be available in a specific subdirectory of this host_injections
directory. In addition, installations of the CUDA SDK included EESSI are stripped down to the files that we are allowed to redistribute; all other files are replaced by symbolic links that point to another specific subdirectory of host_injections
. For example:
$ ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\nlrwxrwxrwx 1 cvmfs cvmfs 109 Dec 21 14:49 /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc -> /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\n
If the corresponding full installation of the CUDA SDK is available there, the CUDA installation included in EESSI can be used to build CUDA software.
"},{"location":"gpu/#nvidia_eessi_native","title":"Using NVIDIA GPUs via a native EESSI installation","text":"Here, we describe the steps to enable GPU support when you have a native EESSI installation on your system.
Required permissions
To enable GPU support for EESSI on your system, you will typically need to have system administration rights, since you need write permissions on the folder to the target directory of the host_injections
symlink.
"},{"location":"gpu/#exposing-nvidia-gpu-drivers","title":"Exposing NVIDIA GPU drivers","text":"To install the symlinks to your GPU drivers in host_injections
, run the link_nvidia_host_libraries.sh
script that is included in EESSI:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh\n
This script uses ldconfig
on your host system to locate your GPU drivers, and creates symbolic links to them in the correct location under host_injections
directory. It also stores the CUDA version supported by the driver that the symlinks were created for.
Re-run link_nvidia_host_libraries.sh
after NVIDIA GPU driver update
You should re-run this script every time you update the NVIDIA GPU drivers on the host system.
Note that it is safe to re-run the script even if no driver updates were done: the script should detect that the current version of the drivers were already symlinked.
"},{"location":"gpu/#installing-full-cuda-sdk-optional","title":"Installing full CUDA SDK (optional)","text":"To install a full CUDA SDK under host_injections
, use the install_cuda_host_injections.sh
script that is included in EESSI:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh\n
For example, to install CUDA 12.1.1 in the directory that the host_injections
variant symlink points to, using /tmp/$USER/EESSI
as directory to store temporary files:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --cuda-version 12.1.1 --temp-dir /tmp/$USER/EESSI --accept-cuda-eula\n
You should choose the CUDA version you wish to install according to what CUDA versions are included in EESSI; see the output of module avail CUDA/
after setting up your environment for using EESSI. You can run /cvmfs/software.eessi.io/scripts/install_cuda_host_injections.sh --help
to check all of the options.
Tip
This script uses EasyBuild to install the CUDA SDK. For this to work, two requirements need to be satisfied:
module load EasyBuild
should work (or the eb
command is already available in the environment); - The version of EasyBuild being used should provide the requested version of the CUDA easyconfig file (in the example case above, that's
CUDA-12.1.1.eb
).
You can rely on the EasyBuild installation that is included in EESSI for this.
Alternatively, you may load an EasyBuild module manually before running the install_cuda_host_injections.sh
script to make an eb
command available.
"},{"location":"gpu/#nvidia_eessi_container","title":"Using NVIDIA GPUs via EESSI in a container","text":"We focus here on the Apptainer/Singularity use case, and have only tested the --nv
option to enable access to GPUs from within the container.
If you are using the EESSI container to access the EESSI software, the procedure for enabling GPU support is slightly different and will be documented here eventually.
"},{"location":"gpu/#exposing-nvidia-gpu-drivers_1","title":"Exposing NVIDIA GPU drivers","text":"When running a container with apptainer
or singularity
it is not necessary to run the install_cuda_host_injections.sh
script since both these tools use $LD_LIBRARY_PATH
internally in order to make the host GPU drivers available in the container.
The only scenario where this would be required is if $LD_LIBRARY_PATH
is modified or undefined.
"},{"location":"gpu/#gpu_cuda_testing","title":"Testing the GPU support","text":"The quickest way to test if software installations included in EESSI can access and use your GPU is to run the deviceQuery
executable that is part of the CUDA-Samples
module:
module load CUDA-Samples\ndeviceQuery\n
If both are successful, you should see information about your GPU printed to your terminal."},{"location":"meetings/","title":"Meetings","text":""},{"location":"meetings/#monthly-meetings-online","title":"Monthly meetings (online)","text":"Online EESSI update meeting, every 1st Thursday of the month at 14:00 CE(S)T.
More info can be found on the EESSI wiki.
"},{"location":"meetings/#physical-meetings","title":"Physical meetings","text":" - EESSI Community Meeting in Amsterdam (NL), 14-16 Sept 2022
"},{"location":"meetings/#physical-meetings-archive","title":"Physical meetings (archive)","text":""},{"location":"meetings/#2020","title":"2020","text":" - Meeting in Groningen (NL), 16 Jan 2020
- Meeting in Delft (NL), 5 Mar 2020
"},{"location":"meetings/#2019","title":"2019","text":" - Meeting in Cambridge (UK), 20-21 May 2019
"},{"location":"overview/","title":"Overview of the EESSI project","text":""},{"location":"overview/#scope-goals","title":"Scope & Goals","text":"Through the EESSI project, we want to set up a shared stack of scientific software installations, and by doing so avoid a lot of duplicate work across HPC sites.
For end users, we want to provide a uniform user experience with respect to available scientific software, regardless of which system they use.
Our software stack should work on laptops, personal workstations, HPC clusters and in the cloud, which means we will need to support different CPUs, networks, GPUs, and so on. We hope to make this work for any Linux distribution and maybe even macOS and Windows via WSL, and a wide variety of CPU architectures (Intel, AMD, ARM, POWER, RISC-V).
Of course we want to focus on the performance of the software, but also on automating the workflow for maintaining the software stack, thoroughly testing the installations, and collaborating efficiently.
"},{"location":"overview/#inspiration","title":"Inspiration","text":"The EESSI concept is heavily inspired by Compute Canada software stack, which is a shared software stack used on all 5 major national systems in Canada and a bunch of smaller ones.
The design of the Compute Canada software stack is discussed in detail in the PEARC'19 paper \"Providing a Unified Software Environment for Canada\u2019s National Advanced Computing Centers\".
It has also been presented at the 5th EasyBuild User Meetings (slides, recorded talk), and is well documented.
"},{"location":"overview/#layered-structure","title":"Layered structure","text":"The EESSI project consists of 3 layers.
The bottom layer is the filesystem layer, which is responsible for distributing the software stack across clients.
The middle layer is a compatibility layer, which ensures that the software stack is compatible with multiple different client operating systems.
The top layer is the software layer, which contains the actual scientific software applications and their dependencies.
The host OS still provides a couple of things, like drivers for network and GPU, support for shared filesystems like GPFS and Lustre, a resource manager like Slurm, and so on.
"},{"location":"overview/#opportunities","title":"Opportunities","text":"We hope to collaborate with interested parties across the HPC community, including HPC centres, vendors, consultancy companies and scientific software developers.
Through our software stack, HPC users can seamlessly hop between sites, since the same software is available everywhere.
We can leverage each others work with respect to providing tested and properly optimized scientific software installations more efficiently, and provide a platform for easy benchmarking of new systems.
By working together with the developers of scientific software we can provide vetted installations for the broad HPC community.
"},{"location":"overview/#challenges","title":"Challenges","text":"There are many challenges in an ambitious project like this, including (but probably not limited to):
- Finding time and manpower to get the software stack set up properly;
- Leveraging system sources like network interconnect (MPI & co), accelerators (GPUs), ...;
- Supporting CPU architectures other than x86_64, including ARM, POWER, RISC-V, ...
- Dealing with licensed software, like Intel tools, MATLAB, ANSYS, ...;
- Integration with resource managers (Slurm) and vendor provided software (Cray PE);
- Convincing HPC site admins to adopt EESSI;
"},{"location":"overview/#current-status","title":"Current status","text":"(June 2020)
We are actively working on the EESSI repository, and are organizing monthly meetings to discuss progress and next steps forward.
Keep an eye on our GitHub repositories at https://github.com/EESSI and our Twitter feed.
"},{"location":"partners/","title":"Project partners","text":""},{"location":"partners/#delft-university-of-technology-the-netherlands","title":"Delft University of Technology (The Netherlands)","text":" - Robbert Eggermont
- Koen Mulderij
"},{"location":"partners/#dell-technologies-europe","title":"Dell Technologies (Europe)","text":" - Walther Blom, High Education & Research
- Jaco van Dijk, Higher Education
"},{"location":"partners/#eindhoven-university-of-technology","title":"Eindhoven University of Technology","text":" - Alain van Hoof, HPC-Lab
"},{"location":"partners/#ghent-university-belgium","title":"Ghent University (Belgium)","text":" - Kenneth Hoste, HPC-UGent
"},{"location":"partners/#hpcnow-spain","title":"HPCNow! (Spain)","text":" - Oriol Mula Valls
"},{"location":"partners/#julich-supercomputing-centre-germany","title":"J\u00fclich Supercomputing Centre (Germany)","text":" - Alan O'Cais
"},{"location":"partners/#university-of-cambridge-united-kingdom","title":"University of Cambridge (United Kingdom)","text":" - Mark Sharpley, Research Computing Services Division
"},{"location":"partners/#university-of-groningen-the-netherlands","title":"University of Groningen (The Netherlands)","text":" - Bob Dr\u00f6ge, Center for Information Technology
- Henk-Jan Zilverberg, Center for Information Technology
"},{"location":"partners/#university-of-twente-the-netherlands","title":"University of Twente (The Netherlands)","text":" - Geert Jan Laanstra, Electrical Engineering, Mathematics and Computer Science (EEMCS)
"},{"location":"partners/#university-of-oslo-norway","title":"University of Oslo (Norway)","text":" - Terje Kvernes
"},{"location":"partners/#university-of-bergen-norway","title":"University of Bergen (Norway)","text":" - Thomas R\u00f6blitz
"},{"location":"partners/#vrije-universiteit-amsterdam-the-netherlands","title":"Vrije Universiteit Amsterdam (The Netherlands)","text":" - Peter Stol
"},{"location":"partners/#surf-the-netherlands","title":"SURF (The Netherlands)","text":" - Caspar van Leeuwen
- Marco Verdicchio
- Bas van der Vlies
"},{"location":"software_layer/","title":"Software layer","text":"The top layer of the EESSI project is the software layer, which provides the actual scientific software installations.
To install the software we include in our stack, we use EasyBuild, a framework for installing scientific software on HPC systems. These installations are optimized for a particular system architecture (specific CPU and GPU generation).
To access these software installation we provide environment module files and use Lmod, a modern environment modules tool which has been widely adopted in the HPC community in recent years.
We leverage the archspec Python library to automatically select the best suited part of the software stack for a particular host, based on its system architecture.
The software layer is maintained through our https://github.com/EESSI/software-layer GitHub repository.
"},{"location":"software_testing/","title":"Software testing","text":"This page has been replaced with test-suite, update your bookmarks!
"},{"location":"support/","title":"Getting support for EESSI","text":"Thanks to the MultiXscale EuroHPC project we are able to provide support to the users of EESSI.
The EESSI support portal is hosted in GitLab: https://gitlab.com/eessi/support.
"},{"location":"support/#open-issue","title":"How to report a problem or ask a question","text":"We recommend you to use a GitLab account if you want to get help from the EESSI support team.
If you have a GitLab account you can submit your problems or questions on EESSI via the issue tracker of the EESSI support portal at https://gitlab.com/eessi/support/-/issues. Please use one of the provided templates (report a problem, software request, question, ...) when creating an issue.
You can also contact us via our e-mail address support (@) eessi.io
, which will automatically create a (private) issue in the EESSI support portal. When you send us an email, please provide us with as much information as possible on your question or problem. You can find an overview of the information that we would like to receive in the README of the EESSI support portal.
"},{"location":"support/#level-of-support","title":"Level of Support","text":"We provide support for EESSI according to a \"reasonable effort\" standard. That means we will go into reasonable effort to help you, but we may not have the time to explore every potential cause, and it may not lead to a (quick) solution. You can compare this to the level of support you typically get from other active open source projects.
Note that the more complete your reported issue is (e.g. description of the error, what you ran, the software environment in which you ran, minimal reproducer, etc.) the bigger the chance is that we can help you with \"reasonable effort\".
"},{"location":"support/#what-do-we-provide-support-for","title":"What do we provide support for","text":""},{"location":"support/#accessing-and-using-the-eessi-software-stack","title":"Accessing and using the EESSI software stack","text":"If you have trouble connecting to the software stack, such as trouble related to installing or configuring CernVM-FS to access the EESSI filesystem layer, or running the software installations included in the EESSI compatibility layer or software layer, please contact us.
Note that we can only help with problems related to the software installations (getting the software to run, to perform as expected, etc.). We do not provide support for using specific features of the provided software, nor can we fix (known or unknown) bugs in the software included in EESSI. We can only help with diagnosing and fixing problems that are caused by how the software was built and installed in EESSI.
"},{"location":"support/#software-requests","title":"Software requests","text":"We are open to software requests for software that is not included in EESSI yet.
The quickest way to add additional software to EESSI is by contributing it yourself as a community contribution, please see the documentation on adding software.
Alternatively, you can send in a request to our support team. Please try to provide as much information on the software as possible: preferably use the issue template (which requires you to log in to GitLab), or make sure to cover the items listed here.
Be aware that we can only provide software that has an appropriate open source license.
"},{"location":"support/#eessi-test-suite","title":"EESSI test suite","text":"If you are using the EESSI test suite, you can get help via the EESSI support portal.
"},{"location":"support/#build-and-deploy-bot","title":"Build-and-deploy bot","text":"If you are using the EESSI build-and-deploy bot, you can get help via the EESSI support portal.
"},{"location":"support/#what-do-we-not-provide-support-for","title":"What do we not provide support for","text":"Do not contact the EESSI support team to get help with using software that is included in EESSI, unless you think the problems you are seeing are related to how the software was built and installed.
Please consult the documentation of the software you are using, or contact the developers of the software directly, if you have questions regarding using the software, or if you think you have found a bug.
Funded by the European Union. This work has received funding from the European High Performance Computing Joint Undertaking (JU) and countries participating in the project under grant agreement No 101093169.
"},{"location":"talks/","title":"Talks related to EESSI","text":""},{"location":"talks/#2023","title":"2023","text":" - Streaming Optimised Scientific Software: an Introduction to EESSI (online tutorial, 5 Dec 2023)
- Best Practices for CernVM-FS in HPC (online tutorial, 4 Dec 2023)
- Streaming optimized scientific software installations on any Linux distro with EESSI (PackagingCon 2023, 27 Oct 2023)
- Making scientific software EESSI - and fast (8-min AWS HPC Tech Short, 15 June 2023)
"},{"location":"adding_software/building_software/","title":"Building software","text":"(for maintainers)
"},{"location":"adding_software/building_software/#bot_build","title":"Instructing the bot to build","text":"Once the pull request is open, you can instruct the bot to build the software by posting a comment.
For more information, see the building section in the bot documentation.
Warning
Permission to trigger building of software must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/building_software/#guidelines","title":"Guidelines","text":" -
It may be wise to let the bot perform a test build first, rather than letting it build for a wide range of CPU targets.
-
If one of the builds failed, you can let the bot retry that specific build.
-
Make sure that the software has been built correctly for all CPU targets before you deploy!
"},{"location":"adding_software/building_software/#checking-the-builds","title":"Checking the builds","text":"If all goes well, you should see SUCCESS
for each build, along with button to get more information about the checks that were performed, and metadata information on the resulting artefact .
Note
Make sure the result is what you expect it to be for all builds before you deploy!
"},{"location":"adding_software/building_software/#failing-builds","title":"Failing builds","text":"Warning
The bot will currently not give you any information on how or why a build is failing.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/building_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.
For more information, see the deploying section in the bot documentation.
Warning
Permission to trigger deployment of software installations must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/building_software/#merging-the-pull-request","title":"Merging the pull request","text":"You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.
Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass , and then the pull request can be merged.
Note
This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml
) that checks for missing installations, in the correct branch (for example 2023.06
) of the software-layer.
If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!
Warning
You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/building_software/#getting-help","title":"Getting help","text":"If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer
channel of the EESSI Slack.
"},{"location":"adding_software/contribution_policy/","title":"Contribution policy","text":"(version v0.1.0 - updated 9 Nov 2023)
Note
This policy is subject to change, please check back regularly.
"},{"location":"adding_software/contribution_policy/#purpose","title":"Purpose","text":"The purpose of this contribution policy is to provide guidelines for adding software to EESSI.
It informs about what requirements must be met in order for software to be eligible for inclusion in the EESSI software layer.
"},{"location":"adding_software/contribution_policy/#requirements","title":"Requirements","text":"The following requirements must be taken into account when adding software to EESSI.
Note that additional restrictions may apply in specific cases that are currently not covered explicitly by this policy.
"},{"location":"adding_software/contribution_policy/#freely_redistributable_software","title":"i) Freely redistributable software","text":"Only freely redistributable software can be added to the EESSI repository, and we strongly prefer including only open source software in EESSI.
Make sure that you are aware of the relevant software licenses, and that redistribution of the software you want to add to EESSI is allowed.
For more information about a specific software license, see the SPDX license list.
Note
We intend to automatically verify that this requirement is met, by requiring that the SPDX license identifier is provided for all software included in EESSI.
"},{"location":"adding_software/contribution_policy/#built_by_bot","title":"ii) Built by the bot","text":"All software included in the EESSI repository must be built autonomously by our bot .
For more information, see our semi-automatic software installation procedure.
"},{"location":"adding_software/contribution_policy/#easybuild","title":"iii) Built and installed with EasyBuild","text":"We currently require that all software installations in EESSI are built and installed using EasyBuild.
We strongly prefer that the latest release of EasyBuild that is available at the time is used to add software to EESSI.
The use of --from-pr
and --include-easyblocks-from-pr
to pull in changes to EasyBuild that are required to make the installation work correctly in EESSI is allowed, but only if that is strictly required (that is, if those changes are not included yet in the latest EasyBuild release).
"},{"location":"adding_software/contribution_policy/#supported_toolchain","title":"iv) Supported compiler toolchain","text":"A compiler toolchain that is still supported by the latest EasyBuild release must be used for building the software.
For more information on supported toolchains, see the EasyBuild toolchain support policy.
"},{"location":"adding_software/contribution_policy/#recent_toolchains","title":"v) Recent toolchain versions","text":"We strongly prefer adding software to EESSI that was built with a recent compiler toolchain.
When adding software to a particular version of EESSI, you should use a toolchain version that is already installed.
If you would like to see an additional toolchain version being added to a particular version of EESSI, please open a support request for this, and motivate your request.
"},{"location":"adding_software/contribution_policy/#recent_software_versions","title":"vi) Recent software versions","text":"We strongly prefer adding sufficiently recent software versions to EESSI.
If you would like to add older software versions, please clearly motivate the need for this in your contribution.
"},{"location":"adding_software/contribution_policy/#cpu_targets","title":"vii) CPU targets","text":"Software that is added to EESSI should work on all supported CPU targets.
Exceptions to this requirement are allowed if technical problems that can not be resolved with reasonable effort prevent the installation of the software for specific CPU targets.
"},{"location":"adding_software/contribution_policy/#testing","title":"viii) Testing","text":"We should be able to test the software installations via the EESSI test suite, in particular for software applications and user-facing tools.
Ideally one or more tests are available that verify that the software is functionally correct, and that it (still) performs well.
Tests that are run during the software installation procedure as performed by EasyBuild must pass. Exceptions can be made if only a small subset of tests fail for specific CPU targets, as long as these exceptions are tracked and an effort is made to assess the impact of those failing tests.
It should be possible to run a minimal smoke test for the software included in EESSI, for example using EasyBuild's --sanity-check-only
feature.
Note
The EESSI test suite is still in active development, and currently only has a minimal set of tests available.
When the test suite is more mature, this requirement will be enforced more strictly.
"},{"location":"adding_software/contribution_policy/#changelog","title":"Changelog","text":""},{"location":"adding_software/contribution_policy/#v010-9-nov-2023","title":"v0.1.0 (9 Nov 2023)","text":" - initial contribution policy
"},{"location":"adding_software/debugging_failed_builds/","title":"Debugging failed builds","text":"(for contributors + maintainers)
Unfortunately, software does not always build successfully. Since EESSI targets novel CPU architectures as well, build failures on such platforms are quite common, as the software and/or the software build systems have not always been adjusted to support these architectures yet.
In EESSI, all software packages are built by a bot. This is great for builds that complete successfully as we can build many software packages for a wide range of hardware with little human intervention. However, it does mean that you, as contributor, can not easily access the build directory and build logs to figure out build issues.
This page describes how you can interactively reproduce failed builds, so that you can more easily debug the issue.
Throughout this page, we will use this PR as an example. It intends to add LAMMPS to EESSI. Among other issues, it failed on a building Plumed.
"},{"location":"adding_software/debugging_failed_builds/#prerequisites","title":"Prerequisites","text":"You will need to have:
- Access to a machine with the hardware for which the build that you want to debug failed.
- On that machine, meet the requirements for running the EESSI container, as described on this page.
"},{"location":"adding_software/debugging_failed_builds/#preparing-the-environment","title":"Preparing the environment","text":"A number of steps are needed to create the same environment in which the bot builds.
- Fetching the feature branch from which you want to replicate a build.
- Starting a shell in the EESSI container.
- Start the Gentoo Prefix environment.
- Start the EESSI software environment.
- Configure EasyBuild.
"},{"location":"adding_software/debugging_failed_builds/#fetching-the-feature-branch","title":"Fetching the feature branch","text":"Looking at the example PR, we see the PR is created from this fork. First, we clone the fork, then checkout the feature branch (LAMMPS_23Jun2022
)
git clone https://github.com/laraPPr/software-layer/\ncd software-layer\ngit checkout LAMMPS_23Jun2022\n
Alternatively, if you already have a clone of the software-layer
you can add it as a new remote cd software-layer\ngit remote add laraPPr https://github.com/laraPPr/software-layer/\ngit fetch laraPPr\ngit checkout LAMMPS_23Jun2022\n
"},{"location":"adding_software/debugging_failed_builds/#starting-a-shell-in-the-eessi-container","title":"Starting a shell in the EESSI container","text":"Simply run the EESSI container (eessi_container.sh
), which should be in the root of the software-layer
repository
./eessi_container.sh --access rw\n
If you want to install NVIDIA GPU software, make sure to also add the --nvidia all
argument, to insure that your GPU drivers get mounted inside the container:
./eessi_container.sh --access rw --nvidia all\n
Note
You may have to press enter to clearly see the prompt as some messages beginning with CernVM-FS:
have been printed after the first prompt Apptainer>
was shown.
"},{"location":"adding_software/debugging_failed_builds/#more-efficient-approach-for-multiplecontinued-debugging-sessions","title":"More efficient approach for multiple/continued debugging sessions","text":"While the above works perfectly well, you might not be able to complete your debugging session in one go. With the above approach, several steps will just be repeated every time you start a debugging session:
- Downloading the container
- Installing
CUDA
in your host injections directory (only if you use the EESSI-install-software.sh
script, see below) - Installing all dependencies (before you get to the package that actually fails to build)
To avoid this, we create two directories. One holds the container & host_injections
, which are (typically) common between multiple PRs and thus you don't have to redownload the container / reinstall the host_injections
if you start working on another PR. The other will hold the PR-specific data: a tarball storing the software you'll build in your interactive debugging session. The paths we pick here are just example, you can pick any persistent, writeable location for this:
eessi_common_dir=${HOME}/eessi-manual-builds\neessi_pr_dir=${HOME}/pr360\n
Now, we start the container
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir}\n
Here, the SINGULARITY_CACHEDIR
makes sure that if the container was already downloaded, and is present in the cache, it is not redownloaded. The host injections will just be picked up from ${eessi_common_dir}/host_injections
(if those were already installed before). And finally, the --save
makes sure that everything that you build in the container gets stored in a tarball as soon as you exit the container.
Note that the first exit
command will first make you exit the Gentoo prefix environment. Only the second will take you out of the container, and print where the tarball will be stored:
[EESSI 2023.06] $ exit\nlogout\nLeaving Gentoo Prefix with exit status 1\nApptainer> exit\nexit\nSaved contents of tmp directory '/tmp/eessi-debug.VgLf1v9gf0' to tarball '${HOME}/pr360/EESSI-1698056784.tgz' (to resume session add '--resume ${HOME}/pr360/EESSI-1698056784.tgz')\n
Note that the tarballs can be quite sizeable, so make sure to pick a filesystem where you have a large enough quotum.
Next time you want to continue investigating this issue, you can start the container with --resume DIR/TGZ
and continue where you left off, having all dependencies already built and available.
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --resume ${eessi_pr_dir}/EESSI-1698056784.tgz\n
For a detailed description on using the script eessi_container.sh
, see here.
Note
Reusing a previously downloaded container, or an existing CUDA installation from a host_injections
is not a good approach if those could be the cause of your issues. If you are unsure whether this is the case, simply follow the regular approach to starting the EESSI container.
Note
It is recommended to clean the container cache and host_injections
directories every now and again, to make sure you pick up the latest changes for those two components.
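One way to do this, assuming the example directory layout used above (double-check the paths before removing anything): rm -rf ${eessi_common_dir}/container_cache/*\nrm -rf ${eessi_common_dir}/host_injections/*\n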
"},{"location":"adding_software/debugging_failed_builds/#start-the-gentoo-prefix-environment","title":"Start the Gentoo Prefix environment","text":"The next step is to start the Gentoo Prefix environment.
Before we start, check the current values of ${EESSI_CVMFS_REPO}
and ${EESSI_VERSION}
so that you can reset them later:
echo ${EESSI_CVMFS_REPO}\necho ${EESSI_VERSION}\n
Then, we set EESSI_OS_TYPE
and EESSI_CPU_FAMILY
and run the startprefix
command to start the Gentoo Prefix environment:
export EESSI_OS_TYPE=linux # We only support Linux for now\nexport EESSI_CPU_FAMILY=$(uname -m)\n${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/startprefix\n
Now, reset the ${EESSI_CVMFS_REPO}
and ${EESSI_VERSION}
in your prefix environment to the initial values (printed by the echo statements above)
export EESSI_CVMFS_REPO=...\nexport EESSI_VERSION=...\n
Note
By activating the Gentoo Prefix environment, the system tools (e.g. ls
) you would normally use are now provided by Gentoo Prefix, instead of the container OS. E.g. running which ls
after starting the prefix environment as above will return /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin/ls
. This makes the builds completely independent from the container OS.
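For example, a quick way to verify this after starting the prefix environment (the exact path will differ per EESSI version and CPU family; the one shown here is just an illustration): which ls\n/cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin/ls\n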
"},{"location":"adding_software/debugging_failed_builds/#building-for-the-generic-optimization-target","title":"Building for the generic
optimization target","text":"If you want to replicate a build with generic
optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic
) you will need to set the following environment variable:
export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-with-the-eessi-install-softwaresh-script","title":"Building software with the EESSI-install-software.sh
script","text":"The Automatic build and deploy bot installs software by executing the EESSI-install-software.sh
script. The advantage is that running this script is the closest you can get to replicating the bot's behaviour - and thus the failure. The downside is that if a PR adds a lot of software, it may take quite a long time to run - even if you might already know what the problematic software package is. In that case, you might be better off following the steps under Building software from an easystack file or Building an individual package.
Note that you could also combine approaches: first build everything using the EESSI-install-software.sh
script, until you reproduce the failure. Then, start making modifications (e.g. changes to the EasyConfig, patches, etc.) and try to rebuild that package individually to test your changes.
To build software using the EESSI-install-software.sh
script, you'll first need to get the diff file for the PR. This is used by the EESSI-install-software.sh
script to see what has changed in this PR - and thus what needs to be built for this PR. To download the diff for PR 360, we would e.g. do
wget https://github.com/EESSI/software-layer/pull/360.diff\n
Now, we run the EESSI-install-software.sh
script:
./EESSI-install-software.sh\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-from-an-easystack-file","title":"Building software from an easystack file","text":""},{"location":"adding_software/debugging_failed_builds/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"To activate the software environment, run
source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n
Note
If you get an error bash: /versions//init/bash: No such file or directory
, you forgot to reset the ${EESSI_CVMFS_REPO}
and ${EESSI_VERSION}
environment variables at the end of the previous step.
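If you do run into this, you can recover by re-exporting the variables with the values you noted down earlier; the values below are only examples: export EESSI_CVMFS_REPO=/cvmfs/software.eessi.io\nexport EESSI_VERSION=2023.06\nsource ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n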
Note
If you want to build with generic optimization, you should run export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic
before sourcing.
For more info on starting the EESSI software environment, see here
"},{"location":"adding_software/debugging_failed_builds/#configure-easybuild","title":"Configure EasyBuild","text":"It is important that we configure EasyBuild in the same way as the bot uses it, with one small exceptions: our working directory will be different. Typically, that doesn't matter, but it's good to be aware of this one difference, in case you fail to replicate the build failure.
In this example, we create a unique temporary directory inside /tmp
to serve as our workdir. Then, we source the configure_easybuild
script, which will configure EasyBuild by setting environment variables.
export WORKDIR=$(mktemp --directory --tmpdir=/tmp -t eessi-debug.XXXXXXXXXX)\nsource configure_easybuild\n
Among other things, the configure_easybuild
script sets the install path for EasyBuild to the correct installation directory (${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_SOFTWARE_SUBDIR}
). This is the exact same path the bot
uses to build; a writeable overlay filesystem in the container is used to write to this path in /cvmfs
(which is normally read-only), just like the bot
does. Note
If you started the container using --resume, you may want WORKDIR to point to the workdir you created previously (instead of creating a new, temporary directory with mktemp
).
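For example, a minimal sketch (the path is a placeholder for whatever mktemp printed in your earlier session): export WORKDIR=/tmp/eessi-debug.XXXXXXXXXX\nsource configure_easybuild\n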
Note
If you want to replicate a build with generic
optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic
) you will need to set export EASYBUILD_OPTARCH=GENERIC
after sourcing configure_easybuild
.
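For example, a minimal sketch of the order of commands for a generic build: source configure_easybuild\nexport EASYBUILD_OPTARCH=GENERIC\n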
Next, we need to determine the correct version of EasyBuild to load. Since the example PR changes the file eessi-2023.06-eb-4.8.1-2021b.yml
, this tells us the bot was using version 4.8.1
of EasyBuild to build this. Thus, we load that version of the EasyBuild module and check if everything was configured correctly:
module load EasyBuild/4.8.1\neb --show-config\n
You should get something similar to #\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath (E) = /tmp/easybuild/easybuild/build\ncontainerpath (E) = /tmp/easybuild/easybuild/containers\ndebug (E) = True\nexperimental (E) = True\nfilter-deps (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib, Yasm\nfilter-env-vars (E) = LD_LIBRARY_PATH\nhooks (E) = ${HOME}/software-layer/eb_hooks.py\nignore-osdeps (E) = True\ninstallpath (E) = /tmp/easybuild/software/linux/aarch64/neoverse_n1\nmodule-extensions (E) = True\npackagepath (E) = /tmp/easybuild/easybuild/packages\nprefix (E) = /tmp/easybuild/easybuild\nread-only-installdir (E) = True\nrepositorypath (E) = /tmp/easybuild/easybuild/ebfiles_repo\nrobot-paths (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/aarch64/neoverse_n1/software/EasyBuild/4.8.1/easybuild/easyconfigs\nrpath (E) = True\nsourcepath (E) = /tmp/easybuild/easybuild/sources:\nsysroot (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/aarch64\ntrace (E) = True\nzip-logs (E) = bzip2\n
"},{"location":"adding_software/debugging_failed_builds/#building-everything-in-the-easystack-file","title":"Building everything in the easystack file","text":"In our example PR, the easystack file that was changed was eessi-2023.06-eb-4.8.1-2021b.yml
. To build this, we run (in the directory that contains the checkout of this feature branch):
eb --easystack eessi-2023.06-eb-4.8.1-2021b.yml --robot\n
After some time, this build fails while trying to build Plumed
, and we can access the build log to look for clues on why it failed."},{"location":"adding_software/debugging_failed_builds/#building-an-individual-package","title":"Building an individual package","text":"First, prepare the environment by following the Starting the EESSI software environment and Configure EasyBuild steps above.
In our example PR, the individual package that was added to eessi-2023.06-eb-4.8.1-2021b.yml
was LAMMPS-23Jun2022-foss-2021b-kokkos.eb
. To mimic the build behaviour, we'll also have to (re)use any options that are listed in the easystack file for LAMMPS-23Jun2022-foss-2021b-kokkos.eb
, in this case the option --from-pr 19000
. Thus, to build, we run:
eb LAMMPS-23Jun2022-foss-2021b-kokkos.eb --robot --from-pr 19000\n
After some time, this build fails while trying to build Plumed
, and we can access the build log to look for clues on why it failed. Note
While this might be faster than the easystack-based approach, this is not how the bot builds. So while it may reproduce the failure the bot encounters, it may also not reproduce the bug at all (no failure) or run into different bugs. If you want to be sure, use the easystack-based approach.
"},{"location":"adding_software/debugging_failed_builds/#running-the-test-step","title":"Running the test step","text":"If you are still in the prefix layer (i.e. after previously building something), exit it first:
$ exit\nlogout\nLeaving Gentoo Prefix with exit status 0\n
Then, source the EESSI init script (again): Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} Apptainer>\n
Note
If you are in a SLURM environment, make sure to run for i in $(env | grep SLURM); do unset \"${i%=*}\"; done
to unset any SLURM environment variables. Failing to do so will cause mpirun
to pick up on these and e.g. infer how many slots are available. If you run into errors of the form \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\", you probably forgot this step.
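To verify that the SLURM variables are really gone, you can run the following check, which should print nothing: env | grep SLURM\n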
Then, execute the run_tests.sh
script. We are assuming you are still in the root of the software-layer
repository that you cloned earlier:
./run_tests.sh\n
If all goes well, you should see (part of) the EESSI test suite being run by ReFrame, finishing with something like [ PASSED ] Ran X/Y test case(s) from Z check(s) (0 failure(s), 0 skipped, 0 aborted)\n
Note
If you are running on a system with hyperthreading enabled, you may still run into the \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\" error from mpirun
, because hardware threads are not considered to be slots by default by OpenMPIs mpirun
. In this case, run with OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1 ./run_tests.sh
(for OpenMPI 4.X) or PRTE_MCA_rmaps_default_mapping_policy=:hwtcpus ./run_tests.sh
(for OpenMPI 5.X).
"},{"location":"adding_software/debugging_failed_builds/#known-causes-of-issues-in-eessi","title":"Known causes of issues in EESSI","text":""},{"location":"adding_software/debugging_failed_builds/#the-custom-system-prefix-of-the-compatibility-layer","title":"The custom system prefix of the compatibility layer","text":"Some installations might expect the system root (sysroot, for short) to be in /
. However, in the case of EESSI, we are building against the OS in the compatibility layer. Thus, our sysroot is something like ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}
. This can cause issues if installation procedures assume the sysroot is in /
.
One example of a sysroot issue was in installing wget
. The EasyConfig for wget
defined
# make sure pkg-config picks up system packages (OpenSSL & co)\npreconfigopts = \"export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
This will not work in EESSI, since OpenSSL should be picked up from the compatibility layer. This was fixed by changing the EasyConfig to read preconfigopts = \"export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
The %(sysroot)s
is a template value which EasyBuild will resolve to the value that has been configured in EasyBuild for sysroot
(it is one of the fields printed by eb --show-config
if a non-standard sysroot is configured). If you encounter issues where the installation cannot find something that is normally provided by the OS (i.e. not one of the dependencies in your module environment), you may need to resort to a similar approach.
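A quick way to check which sysroot value EasyBuild will substitute for %(sysroot)s (assuming you have already sourced configure_easybuild and loaded the EasyBuild module): eb --show-config | grep sysroot\n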
"},{"location":"adding_software/debugging_failed_builds/#the-writeable-overlay","title":"The writeable overlay","text":"The writeable overlay in the container is known to be a bit slow sometimes. Thus, we have seen tests failing because they exceed some timeout (e.g. this issue).
To investigate if the writeable overlay is somehow the issue, you can make sure the installation gets done somewhere else, e.g. in the temporary directory in /tmp
that you created as workdir. To do this, set
export EASYBUILD_INSTALLPATH=${WORKDIR}\n
after the step in which you have sourced the configure_easybuild
script. Note that in order to find (with module av
) any modules that get installed here, you will need to add this path to the MODULEPATH
:
module use ${EASYBUILD_INSTALLPATH}/modules/all\n
Then, retry building the software (as described above). If the build now succeeds, you know that indeed the writeable overlay caused the issue. We have to build in this writeable overlay when we do real deployments. Thus, if you hit such a timeout, try to see if you can (temporarily) modify the timeout value in the test so that it passes.
"},{"location":"adding_software/deploying_software/","title":"Deploying software","text":"(for maintainers)
"},{"location":"adding_software/deploying_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.
For more information, see the deploying section in the bot documentation.
Warning
Permission to trigger deployment of software installations must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/deploying_software/#merging-the-pull-request","title":"Merging the pull request","text":"You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.
Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass , and then the pull request can be merged.
Note
This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml
) that checks for missing installations, in the correct branch (for example 2023.06
) of the software-layer.
If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!
Warning
You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/deploying_software/#getting-help","title":"Getting help","text":"If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer
channel of the EESSI Slack.
"},{"location":"adding_software/opening_pr/","title":"Opening a pull request","text":"(for contributors)
To add software to EESSI, you should go through the semi-automatic software installation procedure by:
- 1) Making a pull request to the software-layer repository to (add or) update an easystack file that is used by EasyBuild to install software;
- 2) Instructing the bot to build the software on all supported CPU microarchitectures;
- 3) Instructing the bot to deploy the built software for ingestion into the EESSI repository;
- 4) Merging the pull request once CI indicates that the software has been ingested.
Warning
Make sure you are also aware of our contribution policy when adding software to EESSI.
"},{"location":"adding_software/opening_pr/#preparation","title":"Preparation","text":"Before you can make a pull request to the software-layer, you should fork the repository in your GitHub account.
For the remainder of these instructions, we assume that your GitHub account is @koala
.
Note
Don't forget to replace koala
with the name of your GitHub account in the commands below!
1) Clone the EESSI/software-layer repository:
mkdir EESSI\ncd EESSI\ngit clone https://github.com/EESSI/software-layer\ncd software-layer\n
2) Add your fork as a remote
git remote add koala git@github.com:koala/software-layer.git\n
3) Check out the branch that corresponds to the version of EESSI repository you want to add software to, for example 2023.06-software.eessi.io
:
git checkout 2023.06-software.eessi.io\n
Note
The commands above only need to be run once, to prepare your setup for making pull requests.
"},{"location":"adding_software/opening_pr/#software_layer_pull_request","title":"Creating a pull request","text":"1) Make sure that your 2023.06-software.eessi.io
branch in the checkout of the EESSI/software-layer
repository is up-to-date
cd EESSI/software-layer\ngit checkout 2023.06-software.eessi.io \ngit pull origin 2023.06-software.eessi.io \n
2) Create a new branch (use a sensible name, not example_branch
as below), and check it out
git checkout -b example_branch\n
3) Determine the correct easystack file to change, and add one or more lines to it that specify which easyconfigs should be installed
echo ' - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\n
4) Stage and commit the changes into your your branch with a sensible message
git add easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\ngit commit -m \"{2023.06}[GCC/12.3.0] example 1.2.3\"\n
5) Push your branch to your fork of the software-layer repository
git push koala example_branch\n
6) Go to the GitHub web interface to open your pull request, or use the helpful link that should show up in the output of the git push
command.
Make sure you target the correct branch: the one that corresponds to the version of EESSI you want to add software to (like 2023.06-software.eessi.io
).
If all goes well, one or more bots should almost instantly create a comment in your pull request with an overview of how it is configured - you will need this information when providing build instructions.
"},{"location":"adding_software/overview/","title":"Overview of adding software to EESSI","text":"We welcome contributions to the EESSI software stack. This page shows the procedure and provides links to the contribution policy and the technical details of making a contribution.
"},{"location":"adding_software/overview/#contribute-a-software-to-the-eessi-software-stack","title":"Contribute a software to the EESSI software stack","text":"\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n I(contributor) \n K(reviewer)\n A(Is there an EasyConfig for software) -->|No|B(Create an EasyConfig and contribute it to EasyBuild)\n A --> |Yes|D(Create a PR to software-layer)\n B --> C(Evaluate and merge pull request)\n C --> D\n D --> E(Review PR & trigger builds)\n E --> F(Debug build issue if needed)\n F --> G(Deploy tarballs to S3 bucket)\n G --> H(Ingest tarballs in EESSI by merging staging PRs)\n classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n class A,B,D,F,I blue\n click B \"https://easybuild.io/\"\n click D \"../opening_pr/\"\n click F \"../debugging_failed_builds/\"\n
"},{"location":"adding_software/overview/#contributing-a-reframe-test-to-the-eessi-test-suite","title":"Contributing a ReFrame test to the EESSI test suite","text":"Ideally, a contributor prepares a ReFrame test for the software to be added to the EESSI software stack.
\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n\n Z(Create ReFrame test & PR to tests-suite) --> Y(Review PR & run new test)\n Y --> W(Debug issue if needed) \n W --> V(Review PR if needed)\n V --> U(Merge PR)\n classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n class Z,W blue\n
"},{"location":"adding_software/overview/#more-about-adding-software-to-eessi","title":"More about adding software to EESSI","text":" - Contribution policy
- Opening a pull request (for contributors)
- Building software (for maintainers)
- Debugging failed builds (for contributors + maintainers)
- Deploying software (for maintainers)
If you need help with adding software to EESSI, please open a support request.
"},{"location":"available_software/overview/","title":"Available software (via modules)","text":"This table gives an overview of all the available software in EESSI per specific CPU target.
Name aarch64 x86_64 amd intel generic neoverse_n1 neoverse_v1 generic zen2 zen3 haswell skylake_avx512"},{"location":"available_software/detail/ALL/","title":"ALL","text":"A Load Balancing Library (ALL) aims to provide an easy way to include dynamicdomain-based load balancing into particle based simulation codes. The libraryis developed in the Simulation Laboratory Molecular Systems of the J\u00fclichSupercomputing Centre at Forschungszentrum J\u00fclich.
https://gitlab.jsc.fz-juelich.de/SLMS/loadbalancing
"},{"location":"available_software/detail/ALL/#available-modules","title":"Available modules","text":"The overview below shows which ALL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ALL, load one of these modules using a module load
command like:
module load ALL/0.9.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ALL/0.9.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/AOFlagger/","title":"AOFlagger","text":"The AOFlagger is a tool that can find and remove radio-frequency interference (RFI)in radio astronomical observations. It can make use of Lua scripts to make flagging strategies flexible,and the tools are applicable to a wide set of telescopes.
https://aoflagger.readthedocs.io/
"},{"location":"available_software/detail/AOFlagger/#available-modules","title":"Available modules","text":"The overview below shows which AOFlagger installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using AOFlagger, load one of these modules using a module load
command like:
module load AOFlagger/3.4.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 AOFlagger/3.4.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/ATK/","title":"ATK","text":"ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
https://developer.gnome.org/atk/
"},{"location":"available_software/detail/ATK/#available-modules","title":"Available modules","text":"The overview below shows which ATK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ATK, load one of these modules using a module load
command like:
module load ATK/2.38.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ATK/2.38.0-GCCcore-12.3.0 x x x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Abseil/","title":"Abseil","text":"Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. The Abseil library code is collected from Google's own C++ code base, has been extensively tested and used in production, and is the same code we depend on in our daily coding lives.
https://abseil.io/
"},{"location":"available_software/detail/Abseil/#available-modules","title":"Available modules","text":"The overview below shows which Abseil installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Abseil, load one of these modules using a module load
command like:
module load Abseil/20230125.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Abseil/20230125.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Armadillo/","title":"Armadillo","text":"Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
https://arma.sourceforge.net/
"},{"location":"available_software/detail/Armadillo/#available-modules","title":"Available modules","text":"The overview below shows which Armadillo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Armadillo, load one of these modules using a module load
command like:
module load Armadillo/12.8.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Armadillo/12.8.0-foss-2023b x x x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x x x"},{"location":"available_software/detail/BLIS/","title":"BLIS","text":"BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries.
https://github.com/flame/blis/
"},{"location":"available_software/detail/BLIS/#available-modules","title":"Available modules","text":"The overview below shows which BLIS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BLIS, load one of these modules using a module load
command like:
module load BLIS/0.9.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLIS/0.9.0-GCC-13.2.0 x x x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/BWA/","title":"BWA","text":"Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
http://bio-bwa.sourceforge.net/
"},{"location":"available_software/detail/BWA/#available-modules","title":"Available modules","text":"The overview below shows which BWA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BWA, load one of these modules using a module load
command like:
module load BWA/0.7.17-20220923-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BWA/0.7.17-20220923-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Bazel/","title":"Bazel","text":"Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
https://bazel.io/
"},{"location":"available_software/detail/Bazel/#available-modules","title":"Available modules","text":"The overview below shows which Bazel installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Bazel, load one of these modules using a module load
command like:
module load Bazel/6.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bazel/6.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/","title":"BeautifulSoup","text":"Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping.
https://www.crummy.com/software/BeautifulSoup
"},{"location":"available_software/detail/BeautifulSoup/#available-modules","title":"Available modules","text":"The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BeautifulSoup, load one of these modules using a module load
command like:
module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/#beautifulsoup4122-gcccore-1230","title":"BeautifulSoup/4.12.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
BeautifulSoup-4.12.2, soupsieve-2.4.1
"},{"location":"available_software/detail/Bison/","title":"Bison","text":"Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
https://www.gnu.org/software/bison
"},{"location":"available_software/detail/Bison/#available-modules","title":"Available modules","text":"The overview below shows which Bison installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Bison, load one of these modules using a module load
command like:
module load Bison/3.8.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bison/3.8.2-GCCcore-13.2.0 x x x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Boost.MPI/","title":"Boost.MPI","text":"Boost provides free peer-reviewed portable C++ source libraries.
https://www.boost.org/
"},{"location":"available_software/detail/Boost.MPI/#available-modules","title":"Available modules","text":"The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost.MPI, load one of these modules using a module load
command like:
module load Boost.MPI/1.82.0-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.MPI/1.82.0-gompi-2023a x x x x x x x x Boost.MPI/1.81.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/Boost.Python/","title":"Boost.Python","text":"Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
https://boostorg.github.io/python
"},{"location":"available_software/detail/Boost.Python/#available-modules","title":"Available modules","text":"The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost.Python, load one of these modules using a module load
command like:
module load Boost.Python/1.83.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.Python/1.83.0-GCC-13.2.0 x x x x x x x x"},{"location":"available_software/detail/Boost/","title":"Boost","text":"Boost provides free peer-reviewed portable C++ source libraries.
https://www.boost.org/
"},{"location":"available_software/detail/Boost/#available-modules","title":"Available modules","text":"The overview below shows which Boost installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost, load one of these modules using a module load
command like:
module load Boost/1.83.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost/1.83.0-GCC-13.2.0 x x x x x x x x Boost/1.82.0-GCC-12.3.0 x x x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Brotli/","title":"Brotli","text":"Brotli is a generic-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed with deflate but offers more dense compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932.
https://github.com/google/brotli
"},{"location":"available_software/detail/Brotli/#available-modules","title":"Available modules","text":"The overview below shows which Brotli installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Brotli, load one of these modules using a module load
command like:
module load Brotli/1.1.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brotli/1.1.0-GCCcore-13.2.0 x x x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Brunsli/","title":"Brunsli","text":"Brunsli is a lossless JPEG repacking library.
https://github.com/google/brunsli/
"},{"location":"available_software/detail/Brunsli/#available-modules","title":"Available modules","text":"The overview below shows which Brunsli installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Brunsli, load one of these modules using a module load
command like:
module load Brunsli/0.1-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brunsli/0.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CDO/","title":"CDO","text":"CDO is a collection of command line Operators to manipulate and analyse Climate and NWP model Data.
https://code.zmaw.de/projects/cdo
"},{"location":"available_software/detail/CDO/#available-modules","title":"Available modules","text":"The overview below shows which CDO installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CDO, load one of these modules using a module load
command like:
module load CDO/2.2.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CDO/2.2.2-gompi-2023b x x x x x x x x CDO/2.2.2-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/CFITSIO/","title":"CFITSIO","text":"CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format.
https://heasarc.gsfc.nasa.gov/fitsio/
"},{"location":"available_software/detail/CFITSIO/#available-modules","title":"Available modules","text":"The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CFITSIO, load one of these modules using a module load
command like:
module load CFITSIO/4.3.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CFITSIO/4.3.1-GCCcore-13.2.0 x x x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CGAL/","title":"CGAL","text":"The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
https://www.cgal.org/
"},{"location":"available_software/detail/CGAL/#available-modules","title":"Available modules","text":"The overview below shows which CGAL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CGAL, load one of these modules using a module load
command like:
module load CGAL/5.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CGAL/5.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/CMake/","title":"CMake","text":"CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
https://www.cmake.org
"},{"location":"available_software/detail/CMake/#available-modules","title":"Available modules","text":"The overview below shows which CMake installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CMake, load one of these modules using a module load
command like:
module load CMake/3.27.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CMake/3.27.6-GCCcore-13.2.0 x x x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CUDA-Samples/","title":"CUDA-Samples","text":"Samples for CUDA Developers which demonstrates features in CUDA Toolkit
https://github.com/NVIDIA/cuda-samples
"},{"location":"available_software/detail/CUDA-Samples/#available-modules","title":"Available modules","text":"The overview below shows which CUDA-Samples installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CUDA-Samples, load one of these modules using a module load
command like:
module load CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/CUDA/","title":"CUDA","text":"CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
https://developer.nvidia.com/cuda-toolkit
"},{"location":"available_software/detail/CUDA/#available-modules","title":"Available modules","text":"The overview below shows which CUDA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CUDA, load one of these modules using a module load
command like:
module load CUDA/12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA/12.1.1 x x x x x x x x"},{"location":"available_software/detail/Catch2/","title":"Catch2","text":"A modern, C++-native, header-only, test framework for unit-tests, TDD and BDD - using C++11, C++14, C++17 and later
https://github.com/catchorg/Catch2
"},{"location":"available_software/detail/Catch2/#available-modules","title":"Available modules","text":"The overview below shows which Catch2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Catch2, load one of these modules using a module load
command like:
module load Catch2/2.13.9-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Catch2/2.13.9-GCCcore-13.2.0 x x x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Cbc/","title":"Cbc","text":"Cbc (Coin-or branch and cut) is an open-source mixed integer linear programming solver written in C++. It can be used as a callable library or using a stand-alone executable.
https://github.com/coin-or/Cbc
"},{"location":"available_software/detail/Cbc/#available-modules","title":"Available modules","text":"The overview below shows which Cbc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Cbc, load one of these modules using a module load
command like:
module load Cbc/2.10.11-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cbc/2.10.11-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Cgl/","title":"Cgl","text":"The COIN-OR Cut Generation Library (Cgl) is a collection of cut generators that can be used with other COIN-OR packages that make use of cuts, such as, among others, the linear solver Clp or the mixed integer linear programming solvers Cbc or BCP. Cgl uses the abstract class OsiSolverInterface (see Osi) to use or communicate with a solver. It does not directly call a solver.
https://github.com/coin-or/Cgl
"},{"location":"available_software/detail/Cgl/#available-modules","title":"Available modules","text":"The overview below shows which Cgl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Cgl, load one of these modules using a module load
command like:
module load Cgl/0.60.8-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cgl/0.60.8-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Clp/","title":"Clp","text":"Clp (Coin-or linear programming) is an open-source linear programming solver. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available.
https://github.com/coin-or/Clp
"},{"location":"available_software/detail/Clp/#available-modules","title":"Available modules","text":"The overview below shows which Clp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Clp, load one of these modules using a module load
command like:
module load Clp/1.17.9-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Clp/1.17.9-foss-2023a x x x x x x x x"},{"location":"available_software/detail/CoinUtils/","title":"CoinUtils","text":"CoinUtils (Coin-OR Utilities) is an open-source collection of classes and functions that are generally useful to more than one COIN-OR project.
https://github.com/coin-or/CoinUtils
"},{"location":"available_software/detail/CoinUtils/#available-modules","title":"Available modules","text":"The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CoinUtils, load one of these modules using a module load
command like:
module load CoinUtils/2.11.10-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CoinUtils/2.11.10-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/DB/","title":"DB","text":"Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
https://www.oracle.com/technetwork/products/berkeleydb
"},{"location":"available_software/detail/DB/#available-modules","title":"Available modules","text":"The overview below shows which DB installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using DB, load one of these modules using a module load
command like:
module load DB/18.1.40-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB/18.1.40-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/DP3/","title":"DP3","text":"DP3: streaming processing pipeline for radio interferometric data.
https://dp3.readthedocs.io/
"},{"location":"available_software/detail/DP3/#available-modules","title":"Available modules","text":"The overview below shows which DP3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using DP3, load one of these modules using a module load
command like:
module load DP3/6.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 DP3/6.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/Doxygen/","title":"Doxygen","text":"Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
https://www.doxygen.org
"},{"location":"available_software/detail/Doxygen/#available-modules","title":"Available modules","text":"The overview below shows which Doxygen installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Doxygen, load one of these modules using a module load
command like:
module load Doxygen/1.9.8-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Doxygen/1.9.8-GCCcore-13.2.0 x x x x x x x x Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ELPA/","title":"ELPA","text":"Eigenvalue SoLvers for Petaflop-Applications.
https://elpa.mpcdf.mpg.de/
"},{"location":"available_software/detail/ELPA/#available-modules","title":"Available modules","text":"The overview below shows which ELPA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ELPA, load one of these modules using a module load
command like:
module load ELPA/2022.05.001-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ELPA/2022.05.001-foss-2022b x x x x x x x x"},{"location":"available_software/detail/ESPResSo/","title":"ESPResSo","text":"A software package for performing and analyzing scientific Molecular Dynamics simulations.
https://espressomd.org/wordpress
"},{"location":"available_software/detail/ESPResSo/#available-modules","title":"Available modules","text":"The overview below shows which ESPResSo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ESPResSo, load one of these modules using a module load
command like:
module load ESPResSo/4.2.1-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ESPResSo/4.2.1-foss-2023a x x x x x x x x"},{"location":"available_software/detail/EasyBuild/","title":"EasyBuild","text":"EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
https://easybuilders.github.io/easybuild
"},{"location":"available_software/detail/EasyBuild/#available-modules","title":"Available modules","text":"The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using EasyBuild, load one of these modules using a module load
command like:
module load EasyBuild/4.9.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 EasyBuild/4.9.0 x x x x x x x x EasyBuild/4.8.2 x x x x x x x x"},{"location":"available_software/detail/Eigen/","title":"Eigen","text":"Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
https://eigen.tuxfamily.org
"},{"location":"available_software/detail/Eigen/#available-modules","title":"Available modules","text":"The overview below shows which Eigen installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Eigen, load one of these modules using a module load
command like:
module load Eigen/3.4.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Eigen/3.4.0-GCCcore-13.2.0 x x x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/EveryBeam/","title":"EveryBeam","text":"Library that provides the antenna response pattern for several instruments, such as LOFAR (and LOBES), SKA (OSKAR), MWA, JVLA, etc.
https://everybeam.readthedocs.io/
"},{"location":"available_software/detail/EveryBeam/#available-modules","title":"Available modules","text":"The overview below shows which EveryBeam installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using EveryBeam, load one of these modules using a module load
command like:
module load EveryBeam/0.5.2-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 EveryBeam/0.5.2-foss-2023b x x x x x x x x"},{"location":"available_software/detail/FFTW.MPI/","title":"FFTW.MPI","text":"FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
https://www.fftw.org
"},{"location":"available_software/detail/FFTW.MPI/#available-modules","title":"Available modules","text":"The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFTW.MPI, load one of these modules using a module load
command like:
module load FFTW.MPI/3.3.10-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW.MPI/3.3.10-gompi-2023b x x x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/FFTW/","title":"FFTW","text":"FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
https://www.fftw.org
"},{"location":"available_software/detail/FFTW/#available-modules","title":"Available modules","text":"The overview below shows which FFTW installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFTW, load one of these modules using a module load
command like:
module load FFTW/3.3.10-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW/3.3.10-GCC-13.2.0 x x x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/FFmpeg/","title":"FFmpeg","text":"A complete, cross-platform solution to record, convert and stream audio and video.
https://www.ffmpeg.org/
"},{"location":"available_software/detail/FFmpeg/#available-modules","title":"Available modules","text":"The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFmpeg, load one of these modules using a module load
command like:
module load FFmpeg/6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFmpeg/6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/FlexiBLAS/","title":"FlexiBLAS","text":"FlexiBLAS is a wrapper library that enables the exchange of the BLAS and LAPACK implementation used by a program without recompiling or relinking it.
https://gitlab.mpi-magdeburg.mpg.de/software/flexiblas-release
"},{"location":"available_software/detail/FlexiBLAS/#available-modules","title":"Available modules","text":"The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FlexiBLAS, load one of these modules using a module load
command like:
module load FlexiBLAS/3.3.1-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/FriBidi/","title":"FriBidi","text":"The Free Implementation of the Unicode Bidirectional Algorithm.
https://github.com/fribidi/fribidi
"},{"location":"available_software/detail/FriBidi/#available-modules","title":"Available modules","text":"The overview below shows which FriBidi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FriBidi, load one of these modules using a module load
command like:
module load FriBidi/1.0.12-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GCC/","title":"GCC","text":"The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
https://gcc.gnu.org/
"},{"location":"available_software/detail/GCC/#available-modules","title":"Available modules","text":"The overview below shows which GCC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GCC, load one of these modules using a module load
command like:
module load GCC/13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCC/13.2.0 x x x x x x x x GCC/12.3.0 x x x x x x x x GCC/12.2.0 x x x x x x x x"},{"location":"available_software/detail/GCCcore/","title":"GCCcore","text":"The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
https://gcc.gnu.org/
"},{"location":"available_software/detail/GCCcore/#available-modules","title":"Available modules","text":"The overview below shows which GCCcore installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GCCcore, load one of these modules using a module load
command like:
module load GCCcore/13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCCcore/13.2.0 x x x x x x x x GCCcore/12.3.0 x x x x x x x x GCCcore/12.2.0 x x x x x x x x"},{"location":"available_software/detail/GDAL/","title":"GDAL","text":"GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
https://www.gdal.org
"},{"location":"available_software/detail/GDAL/#available-modules","title":"Available modules","text":"The overview below shows which GDAL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GDAL, load one of these modules using a module load
command like:
module load GDAL/3.6.2-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDAL/3.6.2-foss-2022b x x x x x x x x"},{"location":"available_software/detail/GDRCopy/","title":"GDRCopy","text":"A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
https://github.com/NVIDIA/gdrcopy
"},{"location":"available_software/detail/GDRCopy/#available-modules","title":"Available modules","text":"The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GDRCopy, load one of these modules using a module load
command like:
module load GDRCopy/2.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDRCopy/2.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GEOS/","title":"GEOS","text":"GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
https://trac.osgeo.org/geos
"},{"location":"available_software/detail/GEOS/#available-modules","title":"Available modules","text":"The overview below shows which GEOS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GEOS, load one of these modules using a module load
command like:
module load GEOS/3.11.1-GCC-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GEOS/3.11.1-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GLPK/","title":"GLPK","text":"The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
https://www.gnu.org/software/glpk/
"},{"location":"available_software/detail/GLPK/#available-modules","title":"Available modules","text":"The overview below shows which GLPK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GLPK, load one of these modules using a module load
command like:
module load GLPK/5.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLPK/5.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GLib/","title":"GLib","text":"GLib is one of the base libraries of the GTK+ project
https://www.gtk.org/
"},{"location":"available_software/detail/GLib/#available-modules","title":"Available modules","text":"The overview below shows which GLib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GLib, load one of these modules using a module load
command like:
module load GLib/2.77.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLib/2.77.1-GCCcore-12.3.0 x x x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GMP/","title":"GMP","text":"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
https://gmplib.org/
"},{"location":"available_software/detail/GMP/#available-modules","title":"Available modules","text":"The overview below shows which GMP installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GMP, load one of these modules using a module load
command like:
module load GMP/6.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GMP/6.2.1-GCCcore-12.3.0 x x x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GObject-Introspection/","title":"GObject-Introspection","text":"GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
https://gi.readthedocs.io/en/latest/
"},{"location":"available_software/detail/GObject-Introspection/#available-modules","title":"Available modules","text":"The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GObject-Introspection, load one of these modules using a module load
command like:
module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GSL/","title":"GSL","text":"The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
https://www.gnu.org/software/gsl/
"},{"location":"available_software/detail/GSL/#available-modules","title":"Available modules","text":"The overview below shows which GSL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GSL, load one of these modules using a module load
command like:
module load GSL/2.7-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GSL/2.7-GCC-13.2.0 x x x x x x x x GSL/2.7-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GTK3/","title":"GTK3","text":"GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
https://developer.gnome.org/gtk3/stable/
"},{"location":"available_software/detail/GTK3/#available-modules","title":"Available modules","text":"The overview below shows which GTK3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GTK3, load one of these modules using a module load
command like:
module load GTK3/3.24.37-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GTK3/3.24.37-GCCcore-12.3.0 x x x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Gdk-Pixbuf/","title":"Gdk-Pixbuf","text":"The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
https://docs.gtk.org/gdk-pixbuf/
"},{"location":"available_software/detail/Gdk-Pixbuf/#available-modules","title":"Available modules","text":"The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Gdk-Pixbuf, load one of these modules using a module load
command like:
module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Ghostscript/","title":"Ghostscript","text":"Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
https://ghostscript.com
"},{"location":"available_software/detail/Ghostscript/#available-modules","title":"Available modules","text":"The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Ghostscript, load one of these modules using a module load
command like:
module load Ghostscript/10.01.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GitPython/","title":"GitPython","text":"GitPython is a python library used to interact with Git repositories
https://gitpython.readthedocs.org
"},{"location":"available_software/detail/GitPython/#available-modules","title":"Available modules","text":"The overview below shows which GitPython installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GitPython, load one of these modules using a module load
command like:
module load GitPython/3.1.40-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GitPython/3.1.40-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GitPython/#gitpython3140-gcccore-1230","title":"GitPython/3.1.40-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
gitdb-4.0.11, GitPython-3.1.40, smmap-5.0.1
"},{"location":"available_software/detail/HDF/","title":"HDF","text":"HDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
https://www.hdfgroup.org/products/hdf4/
"},{"location":"available_software/detail/HDF/#available-modules","title":"Available modules","text":"The overview below shows which HDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HDF, load one of these modules using a module load
command like:
module load HDF/4.2.15-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF/4.2.15-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/HDF5/","title":"HDF5","text":"HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
https://portal.hdfgroup.org/display/support
"},{"location":"available_software/detail/HDF5/#available-modules","title":"Available modules","text":"The overview below shows which HDF5 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HDF5, load one of these modules using a module load
command like:
module load HDF5/1.14.3-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF5/1.14.3-gompi-2023b x x x x x x x x HDF5/1.14.0-gompi-2023a x x x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/HarfBuzz/","title":"HarfBuzz","text":"HarfBuzz is an OpenType text shaping engine.
https://www.freedesktop.org/wiki/Software/HarfBuzz
"},{"location":"available_software/detail/HarfBuzz/#available-modules","title":"Available modules","text":"The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HarfBuzz, load one of these modules using a module load
command like:
module load HarfBuzz/5.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/HepMC3/","title":"HepMC3","text":"HepMC is a standard for storing Monte Carlo event data.
http://hepmc.web.cern.ch/hepmc/
"},{"location":"available_software/detail/HepMC3/#available-modules","title":"Available modules","text":"The overview below shows which HepMC3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HepMC3, load one of these modules using a module load
command like:
module load HepMC3/3.2.6-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HepMC3/3.2.6-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Highway/","title":"Highway","text":"Highway is a C++ library for SIMD (Single Instruction, Multiple Data), i.e. applying the same operation to 'lanes'.
https://github.com/google/highway
"},{"location":"available_software/detail/Highway/#available-modules","title":"Available modules","text":"The overview below shows which Highway installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Highway, load one of these modules using a module load
command like:
module load Highway/1.0.3-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Highway/1.0.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ICU/","title":"ICU","text":"ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
https://icu.unicode.org
"},{"location":"available_software/detail/ICU/#available-modules","title":"Available modules","text":"The overview below shows which ICU installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ICU, load one of these modules using a module load
command like:
module load ICU/74.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ICU/74.1-GCCcore-13.2.0 x x x x x x x x ICU/73.2-GCCcore-12.3.0 x x x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/IDG/","title":"IDG","text":"Image Domain Gridding (IDG) is a fast method for convolutional resampling (gridding/degridding) of radio astronomical data (visibilities). Direction dependent effects (DDEs) or A-terms can be applied in the gridding process. The algorithm is described in \"Image Domain Gridding: a fast method for convolutional resampling of visibilities\", Van der Tol (2018). The implementation is described in \"Radio-astronomical imaging on graphics processors\", Veenboer (2020). Please cite these papers in publications using IDG.
https://idg.readthedocs.io/
"},{"location":"available_software/detail/IDG/#available-modules","title":"Available modules","text":"The overview below shows which IDG installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using IDG, load one of these modules using a module load
command like:
module load IDG/1.2.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 IDG/1.2.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/IPython/","title":"IPython","text":"IPython provides a rich architecture for interactive computing with: Powerful interactive shells (terminal and Qt-based). A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into your own projects. Easy to use, high performance tools for parallel computing.
https://ipython.org/index.html
"},{"location":"available_software/detail/IPython/#available-modules","title":"Available modules","text":"The overview below shows which IPython installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using IPython, load one of these modules using a module load
command like:
module load IPython/8.14.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 IPython/8.14.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/IPython/#ipython8140-gcccore-1230","title":"IPython/8.14.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
asttokens-2.2.1, backcall-0.2.0, executing-1.2.0, ipython-8.14.0, jedi-0.19.0, matplotlib-inline-0.1.6, parso-0.8.3, pickleshare-0.7.5, prompt_toolkit-3.0.39, pure_eval-0.2.2, stack_data-0.6.2, traitlets-5.9.0
"},{"location":"available_software/detail/ImageMagick/","title":"ImageMagick","text":"ImageMagick is a software suite to create, edit, compose, or convert bitmap images
https://www.imagemagick.org/
"},{"location":"available_software/detail/ImageMagick/#available-modules","title":"Available modules","text":"The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ImageMagick, load one of these modules using a module load
command like:
module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Imath/","title":"Imath","text":"Imath is a C++ and python library of 2D and 3D vector, matrix, and math operations for computer graphics
https://imath.readthedocs.io/en/latest/
"},{"location":"available_software/detail/Imath/#available-modules","title":"Available modules","text":"The overview below shows which Imath installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Imath, load one of these modules using a module load
command like:
module load Imath/3.1.6-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Imath/3.1.6-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/JasPer/","title":"JasPer","text":"The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
https://www.ece.uvic.ca/~frodo/jasper/
"},{"location":"available_software/detail/JasPer/#available-modules","title":"Available modules","text":"The overview below shows which JasPer installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JasPer, load one of these modules using a module load
command like:
module load JasPer/4.0.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JasPer/4.0.0-GCCcore-13.2.0 x x x x x x x x JasPer/4.0.0-GCCcore-12.3.0 x x x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Java/","title":"Java","text":""},{"location":"available_software/detail/Java/#available-modules","title":"Available modules","text":"The overview below shows which Java installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Java, load one of these modules using a module load
command like:
module load Java/11.0.20\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Java/11.0.20 x x x x x x x x Java/11(@Java/11.0.20) x x x x x x x x"},{"location":"available_software/detail/JsonCpp/","title":"JsonCpp","text":"JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comment in unserialization/serialization steps, making it a convenient format to store user input files.
https://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html
"},{"location":"available_software/detail/JsonCpp/#available-modules","title":"Available modules","text":"The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JsonCpp, load one of these modules using a module load
command like:
module load JsonCpp/1.9.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/JupyterLab/","title":"JupyterLab","text":"JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook.
https://jupyter.org/
"},{"location":"available_software/detail/JupyterLab/#available-modules","title":"Available modules","text":"The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JupyterLab, load one of these modules using a module load
command like:
module load JupyterLab/4.0.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/JupyterLab/#jupyterlab405-gcccore-1230","title":"JupyterLab/4.0.5-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
async-lru-2.0.4, json5-0.9.14, jupyter-lsp-2.2.0, jupyterlab-4.0.5, jupyterlab_server-2.24.0
"},{"location":"available_software/detail/JupyterNotebook/","title":"JupyterNotebook","text":"The Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience.
https://jupyter.org/
"},{"location":"available_software/detail/JupyterNotebook/#available-modules","title":"Available modules","text":"The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JupyterNotebook, load one of these modules using a module load
command like:
module load JupyterNotebook/7.0.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LAME/","title":"LAME","text":"LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
http://lame.sourceforge.net/
"},{"location":"available_software/detail/LAME/#available-modules","title":"Available modules","text":"The overview below shows which LAME installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LAME, load one of these modules using a module load
command like:
module load LAME/3.100-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAME/3.100-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LAMMPS/","title":"LAMMPS","text":"LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
https://www.lammps.org
"},{"location":"available_software/detail/LAMMPS/#available-modules","title":"Available modules","text":"The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LAMMPS, load one of these modules using a module load
command like:
module load LAMMPS/2Aug2023_update2-foss-2023a-kokkos\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAMMPS/2Aug2023_update2-foss-2023a-kokkos x x x x x x x x"},{"location":"available_software/detail/LERC/","title":"LERC","text":"LERC is an open-source image or raster format which supports rapid encoding and decoding for any pixel type (not just RGB or Byte). Users set the maximum compression error per pixel while encoding, so the precision of the original input image is preserved (within user defined error bounds).
https://github.com/Esri/lerc
"},{"location":"available_software/detail/LERC/#available-modules","title":"Available modules","text":"The overview below shows which LERC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LERC, load one of these modules using a module load
command like:
module load LERC/4.0.0-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LERC/4.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LHAPDF/","title":"LHAPDF","text":"Les Houches Parton Density Function. LHAPDF is the standard tool for evaluating parton distribution functions (PDFs) in high-energy physics.
http://lhapdf.hepforge.org/
"},{"location":"available_software/detail/LHAPDF/#available-modules","title":"Available modules","text":"The overview below shows which LHAPDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LHAPDF, load one of these modules using a module load
command like:
module load LHAPDF/6.5.4-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LHAPDF/6.5.4-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LLVM/","title":"LLVM","text":"The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation (\"LLVM IR\"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
https://llvm.org/
"},{"location":"available_software/detail/LLVM/#available-modules","title":"Available modules","text":"The overview below shows which LLVM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LLVM, load one of these modules using a module load
command like:
module load LLVM/16.0.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LLVM/16.0.6-GCCcore-12.3.0 x x x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LibTIFF/","title":"LibTIFF","text":"tiff: Library and tools for reading and writing TIFF data files
https://libtiff.gitlab.io/libtiff/
"},{"location":"available_software/detail/LibTIFF/#available-modules","title":"Available modules","text":"The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LibTIFF, load one of these modules using a module load
command like:
module load LibTIFF/4.6.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LittleCMS/","title":"LittleCMS","text":"Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
https://www.littlecms.com/
"},{"location":"available_software/detail/LittleCMS/#available-modules","title":"Available modules","text":"The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LittleCMS, load one of these modules using a module load
command like:
module load LittleCMS/2.15-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LittleCMS/2.15-GCCcore-12.3.0 x x x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LoopTools/","title":"LoopTools","text":"LoopTools is a package for evaluation of scalar and tensor one-loop integrals. It is based on the FF package by G.J. van Oldenborgh.
https://feynarts.de/looptools/
"},{"location":"available_software/detail/LoopTools/#available-modules","title":"Available modules","text":"The overview below shows which LoopTools installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LoopTools, load one of these modules using a module load
command like:
module load LoopTools/2.15-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LoopTools/2.15-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Lua/","title":"Lua","text":"Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
https://www.lua.org/
"},{"location":"available_software/detail/Lua/#available-modules","title":"Available modules","text":"The overview below shows which Lua installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Lua, load one of these modules using a module load
command like:
module load Lua/5.4.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Lua/5.4.6-GCCcore-13.2.0 x x x x x x x x Lua/5.4.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MDI/","title":"MDI","text":"The MolSSI Driver Interface (MDI) project provides a standardized API for fast, on-the-fly communication between computational chemistry codes. This greatly simplifies the process of implementing methods that require the cooperation of multiple software packages and enables developers to write a single implementation that works across many different codes. The API is sufficiently general to support a wide variety of techniques, including QM/MM, ab initio MD, machine learning, advanced sampling, and path integral MD, while also being straightforwardly extensible. Communication between codes is handled by the MDI Library, which enables tight coupling between codes using either the MPI or TCP/IP methods.
https://github.com/MolSSI-MDI/MDI_Library
"},{"location":"available_software/detail/MDI/#available-modules","title":"Available modules","text":"The overview below shows which MDI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MDI, load one of these modules using a module load
command like:
module load MDI/1.4.26-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDI/1.4.26-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/METIS/","title":"METIS","text":"METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
http://glaros.dtc.umn.edu/gkhome/metis/metis/overview
"},{"location":"available_software/detail/METIS/#available-modules","title":"Available modules","text":"The overview below shows which METIS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using METIS, load one of these modules using a module load
command like:
module load METIS/5.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 METIS/5.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MPC/","title":"MPC","text":"Gnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.
http://www.multiprecision.org/
"},{"location":"available_software/detail/MPC/#available-modules","title":"Available modules","text":"The overview below shows which MPC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MPC, load one of these modules using a module load
command like:
module load MPC/1.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPC/1.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MPFR/","title":"MPFR","text":"The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
https://www.mpfr.org
"},{"location":"available_software/detail/MPFR/#available-modules","title":"Available modules","text":"The overview below shows which MPFR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MPFR, load one of these modules using a module load
command like:
module load MPFR/4.2.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPFR/4.2.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MUMPS/","title":"MUMPS","text":"A parallel sparse direct solver
https://graal.ens-lyon.fr/MUMPS/
"},{"location":"available_software/detail/MUMPS/#available-modules","title":"Available modules","text":"The overview below shows which MUMPS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MUMPS, load one of these modules using a module load
command like:
module load MUMPS/5.6.1-foss-2023a-metis\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MUMPS/5.6.1-foss-2023a-metis x x x x x x x x"},{"location":"available_software/detail/Mako/","title":"Mako","text":"A super-fast templating language that borrows the best ideas from the existing templating languages
https://www.makotemplates.org
"},{"location":"available_software/detail/Mako/#available-modules","title":"Available modules","text":"The overview below shows which Mako installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Mako, load one of these modules using a module load
command like:
module load Mako/1.2.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mako/1.2.4-GCCcore-12.3.0 x x x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Mako/#mako124-gcccore-1230","title":"Mako/1.2.4-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
Mako-1.2.4, MarkupSafe-2.1.3
"},{"location":"available_software/detail/Mesa/","title":"Mesa","text":"Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
https://www.mesa3d.org/
"},{"location":"available_software/detail/Mesa/#available-modules","title":"Available modules","text":"The overview below shows which Mesa installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Mesa, load one of these modules using a module load
command like:
module load Mesa/23.1.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mesa/23.1.4-GCCcore-12.3.0 x x x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Meson/","title":"Meson","text":"Meson is a cross-platform build system designed to be both as fast and as user friendly as possible.
https://mesonbuild.com
"},{"location":"available_software/detail/Meson/#available-modules","title":"Available modules","text":"The overview below shows which Meson installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Meson, load one of these modules using a module load
command like:
module load Meson/1.2.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Meson/1.2.3-GCCcore-13.2.0 x x x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NASM/","title":"NASM","text":"NASM: General-purpose x86 assembler
https://www.nasm.us/
"},{"location":"available_software/detail/NASM/#available-modules","title":"Available modules","text":"The overview below shows which NASM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NASM, load one of these modules using a module load
command like:
module load NASM/2.16.01-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NASM/2.16.01-GCCcore-13.2.0 x x x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NCCL/","title":"NCCL","text":"The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
https://developer.nvidia.com/nccl
"},{"location":"available_software/detail/NCCL/#available-modules","title":"Available modules","text":"The overview below shows which NCCL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NCCL, load one of these modules using a module load
command like:
module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/NSPR/","title":"NSPR","text":"Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR
"},{"location":"available_software/detail/NSPR/#available-modules","title":"Available modules","text":"The overview below shows which NSPR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NSPR, load one of these modules using a module load
command like:
module load NSPR/4.35-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSPR/4.35-GCCcore-12.3.0 x x x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NSS/","title":"NSS","text":"Network Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS
"},{"location":"available_software/detail/NSS/#available-modules","title":"Available modules","text":"The overview below shows which NSS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NSS, load one of these modules using a module load
command like:
module load NSS/3.89.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSS/3.89.1-GCCcore-12.3.0 x x x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Nextflow/","title":"Nextflow","text":"Nextflow is a reactive workflow framework and a programming DSL that eases writing computational pipelines with complex data
https://www.nextflow.io/
"},{"location":"available_software/detail/Nextflow/#available-modules","title":"Available modules","text":"The overview below shows which Nextflow installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Nextflow, load one of these modules using a module load
command like:
module load Nextflow/23.10.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Nextflow/23.10.0 x x x x x x x x"},{"location":"available_software/detail/Ninja/","title":"Ninja","text":"Ninja is a small build system with a focus on speed.
https://ninja-build.org/
"},{"location":"available_software/detail/Ninja/#available-modules","title":"Available modules","text":"The overview below shows which Ninja installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Ninja, load one of these modules using a module load
command like:
module load Ninja/1.11.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ninja/1.11.1-GCCcore-13.2.0 x x x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OSU-Micro-Benchmarks/","title":"OSU-Micro-Benchmarks","text":"OSU Micro-Benchmarks
https://mvapich.cse.ohio-state.edu/benchmarks/
"},{"location":"available_software/detail/OSU-Micro-Benchmarks/#available-modules","title":"Available modules","text":"The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OSU-Micro-Benchmarks, load one of these modules using a module load
command like:
module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x x x OSU-Micro-Benchmarks/7.2-gompi-2023a-CUDA-12.1.1 x x x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/OpenBLAS/","title":"OpenBLAS","text":"OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
http://www.openblas.net/
"},{"location":"available_software/detail/OpenBLAS/#available-modules","title":"Available modules","text":"The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenBLAS, load one of these modules using a module load
command like:
module load OpenBLAS/0.3.24-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenEXR/","title":"OpenEXR","text":"OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications
https://www.openexr.com/
"},{"location":"available_software/detail/OpenEXR/#available-modules","title":"Available modules","text":"The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenEXR, load one of these modules using a module load
command like:
module load OpenEXR/3.1.5-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenFOAM/","title":"OpenFOAM","text":"OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
https://www.openfoam.org/
"},{"location":"available_software/detail/OpenFOAM/#available-modules","title":"Available modules","text":"The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenFOAM, load one of these modules using a module load
command like:
module load OpenFOAM/11-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenFOAM/11-foss-2023a x x x x x x x x"},{"location":"available_software/detail/OpenJPEG/","title":"OpenJPEG","text":"OpenJPEG is an open-source JPEG 2000 codec written in C language. It has been developed in order to promote the use of JPEG 2000, a still-image compression standard from the Joint Photographic Experts Group (JPEG). Since May 2015, it is officially recognized by ISO/IEC and ITU-T as a JPEG 2000 Reference Software.
https://www.openjpeg.org/
"},{"location":"available_software/detail/OpenJPEG/#available-modules","title":"Available modules","text":"The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenJPEG, load one of these modules using a module load
command like:
module load OpenJPEG/2.5.0-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenMPI/","title":"OpenMPI","text":"The Open MPI Project is an open source MPI-3 implementation.
https://www.open-mpi.org/
"},{"location":"available_software/detail/OpenMPI/#available-modules","title":"Available modules","text":"The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenMPI, load one of these modules using a module load
command like:
module load OpenMPI/4.1.6-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenMPI/4.1.6-GCC-13.2.0 x x x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenPGM/","title":"OpenPGM","text":"OpenPGM is an open source implementation of the Pragmatic General Multicast (PGM) specification in RFC 3208 available at www.ietf.org. PGM is a reliable and scalable multicast protocol that enables receivers to detect loss, request retransmission of lost data, or notify an application of unrecoverable loss. PGM is a receiver-reliable protocol, which means the receiver is responsible for ensuring all data is received, absolving the sender of reception responsibility.
https://code.google.com/p/openpgm/
"},{"location":"available_software/detail/OpenPGM/#available-modules","title":"Available modules","text":"The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenPGM, load one of these modules using a module load
command like:
module load OpenPGM/5.2.122-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/OpenSSL/","title":"OpenSSL","text":"The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolchain implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library.
https://www.openssl.org/
"},{"location":"available_software/detail/OpenSSL/#available-modules","title":"Available modules","text":"The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenSSL, load one of these modules using a module load
command like:
module load OpenSSL/1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenSSL/1.1 x x x x x x x x"},{"location":"available_software/detail/Osi/","title":"Osi","text":"Osi (Open Solver Interface) provides an abstract base class to a generic linear programming (LP) solver, along with derived classes for specific solvers. Many applications may be able to use the Osi to insulate themselves from a specific LP solver. That is, programs written to the OSI standard may be linked to any solver with an OSI interface and should produce correct results. The OSI has been significantly extended compared to its first incarnation. Currently, the OSI supports linear programming solvers and has rudimentary support for integer programming.
https://github.com/coin-or/Osi
"},{"location":"available_software/detail/Osi/#available-modules","title":"Available modules","text":"The overview below shows which Osi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Osi, load one of these modules using a module load
command like:
module load Osi/0.108.9-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Osi/0.108.9-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PCRE/","title":"PCRE","text":"The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
https://www.pcre.org/
"},{"location":"available_software/detail/PCRE/#available-modules","title":"Available modules","text":"The overview below shows which PCRE installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PCRE, load one of these modules using a module load
command like:
module load PCRE/8.45-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE/8.45-GCCcore-13.2.0 x x x x x x x x PCRE/8.45-GCCcore-12.3.0 x x x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PCRE2/","title":"PCRE2","text":"The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
https://www.pcre.org/
"},{"location":"available_software/detail/PCRE2/#available-modules","title":"Available modules","text":"The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PCRE2, load one of these modules using a module load
command like:
module load PCRE2/10.42-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE2/10.42-GCCcore-12.3.0 x x x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PGPLOT/","title":"PGPLOT","text":"The PGPLOT Graphics Subroutine Library is a Fortran- or C-callable, device-independent graphics package for making simple scientific graphs. It is intended for making graphical images of publication quality with minimum effort on the part of the user. For most applications, the program can be device-independent, and the output can be directed to the appropriate device at run time.
https://sites.astro.caltech.edu/~tjp/pgplot/
"},{"location":"available_software/detail/PGPLOT/#available-modules","title":"Available modules","text":"The overview below shows which PGPLOT installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PGPLOT, load one of these modules using a module load
command like:
module load PGPLOT/5.2.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PGPLOT/5.2.2-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/PLUMED/","title":"PLUMED","text":"PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
https://www.plumed.org
"},{"location":"available_software/detail/PLUMED/#available-modules","title":"Available modules","text":"The overview below shows which PLUMED installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PLUMED, load one of these modules using a module load
command like:
module load PLUMED/2.9.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLUMED/2.9.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PLY/","title":"PLY","text":"PLY is yet another implementation of lex and yacc for Python.
https://www.dabeaz.com/ply/
"},{"location":"available_software/detail/PLY/#available-modules","title":"Available modules","text":"The overview below shows which PLY installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PLY, load one of these modules using a module load
command like:
module load PLY/3.11-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLY/3.11-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PMIx/","title":"PMIx","text":"Process Management for Exascale Environments. PMI Exascale (PMIx) represents an attempt to provide an extended version of the PMI standard specifically designed to support clusters up to and including exascale sizes. The overall objective of the project is not to branch the existing pseudo-standard definitions - in fact, PMIx fully supports both of the existing PMI-1 and PMI-2 APIs - but rather to (a) augment and extend those APIs to eliminate some current restrictions that impact scalability, and (b) provide a reference implementation of the PMI-server that demonstrates the desired level of scalability.
https://pmix.org/
"},{"location":"available_software/detail/PMIx/#available-modules","title":"Available modules","text":"The overview below shows which PMIx installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PMIx, load one of these modules using a module load
command like:
module load PMIx/4.2.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PMIx/4.2.6-GCCcore-13.2.0 x x x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PROJ/","title":"PROJ","text":"Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into Cartesian coordinates
https://proj.org
"},{"location":"available_software/detail/PROJ/#available-modules","title":"Available modules","text":"The overview below shows which PROJ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PROJ, load one of these modules using a module load
command like:
module load PROJ/9.3.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PROJ/9.3.1-GCCcore-13.2.0 x x x x x x x x PROJ/9.2.0-GCCcore-12.3.0 x x x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Pango/","title":"Pango","text":"Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.
https://www.pango.org/
"},{"location":"available_software/detail/Pango/#available-modules","title":"Available modules","text":"The overview below shows which Pango installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Pango, load one of these modules using a module load
command like:
module load Pango/1.50.14-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pango/1.50.14-GCCcore-12.3.0 x x x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ParaView/","title":"ParaView","text":"ParaView is a scientific parallel visualizer.
https://www.paraview.org
"},{"location":"available_software/detail/ParaView/#available-modules","title":"Available modules","text":"The overview below shows which ParaView installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ParaView, load one of these modules using a module load
command like:
module load ParaView/5.11.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParaView/5.11.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Perl/","title":"Perl","text":"Larry Wall's Practical Extraction and Report Language. Includes a small selection of extra CPAN packages for core functionality.
https://www.perl.org/
"},{"location":"available_software/detail/Perl/#available-modules","title":"Available modules","text":"The overview below shows which Perl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Perl, load one of these modules using a module load
command like:
module load Perl/5.38.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl/5.38.0-GCCcore-13.2.0 x x x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Perl/#perl5380-gcccore-1320","title":"Perl/5.38.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21
"},{"location":"available_software/detail/Perl/#perl5361-gcccore-1230","title":"Perl/5.36.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21
"},{"location":"available_software/detail/Perl/#perl5360-gcccore-1220","title":"Perl/5.36.0-GCCcore-12.2.0","text":"This is a list of extensions included in the module:
Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.334, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.007, Business::ISBN::Data-20210112.006, Canary::Stability-2013, Capture::Tiny-0.48, Carp-1.50, Carp::Clan-6.08, Carp::Heavy-1.50, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Clone-0.45, Clone::Choose-0.010, common::sense-3.75, Config::General-2.65, Config::INI-0.027, Config::MVP-2.200012, Config::Simple-4.58, Config::Tiny-2.28, constant-1.33, CPAN::Meta::Check-0.014, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper-2.183, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.112, Data::Section-0.200007, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.58, DateTime::Locale-1.36, DateTime::TimeZone-2.53, DateTime::Tiny-1.07, DBD::CSV-0.59, DBD::SQLite-1.70, DBI-1.643, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.025, Email::Date::Format-1.005, Encode-3.19, Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter-5.74, Exporter::Declare-0.114, Exporter::Tiny-1.004000, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.64, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::Path-2.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.013, File::Spec-3.75, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long-2.52, Getopt::Long::Descriptive-0.110, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9725, Graph::ReadWrite-2.10, Hash::Merge-0.302, Heap-0.80, HTML::Entities::Interpolate-1.10, HTML::Form-6.10, HTML::Parser-3.78, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::Cookies-6.10, HTTP::Daemon-6.14, HTTP::Date-6.05, HTTP::Negotiate-6.01, HTTP::Request-6.37, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::HTML-1.004, IO::Socket::SSL-2.075, IO::String-1.08, IO::Stringy-2.113, IO::Tty-1.16, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.09, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.58, List::Util-1.63, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.70, Log::Dispatchouli-2.023, Log::Handler-0.90, 
Log::Log4perl-1.56, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.33, Log::Report::Optional-1.07, Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.67, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.879, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-1.013.1, MIME::Lite-3.033, MIME::Types-2.22, Mixin::Linewise::Readers-0.110, Mock::Quick-1.111, Module::Build-0.4231, Module::Build::Tiny-0.039, Module::Build::XSUtil-0.19, Module::CoreList-5.20220820, Module::Implementation-0.09, Module::Install-1.19, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000037, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005004, Moose-2.2201, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.006, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.201, MooseX::Types-0.50, MooseX::Types::Perl-0.101343, Mouse-v2.5.10, Mozilla::CA-20211001, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.14, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.17, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.30, Params::ValidationCompiler-0.30, parent-0.238, Parse::RecDescent-1.967015, Path::Tiny-0.124, PDF::API2-2.043, Perl::OSType-1.010, PerlIO::utf8_strict-0.009, Pod::Elemental-0.103005, Pod::Escapes-1.07, Pod::Eventual-0.094002, Pod::LaTeX-0.61, Pod::Man-4.14, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.43, Pod::Weaver-4.018, Readonly-2.05, Regexp::Common-2017060201, Role::HasMessage-0.006, Role::Identifiable::HasIdent-0.008, Role::Tiny-2.002004, Scalar::Util-1.63, Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104002, Specio-0.48, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Flogger-1.101245, String::Print-0.94, String::RewritePrefix-0.008, String::Truncate-1.100602, Sub::Exporter-0.988, Sub::Exporter::ForMethods-0.100054, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.928, Sub::Name-0.26, Sub::Quote-2.006006, Sub::Uplevel-0.2800, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Unknown-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.42, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000145, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.130, Test::Differences-0.69, Test::Exception-0.43, Test::Fatal-0.016, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More-1.302191, Test::More::UTF8-0.05, Test::Most-0.37, Test::Needs-0.002009, Test::NoWarnings-1.06, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302191, Test::Version-2.09, Test::Warn-0.37, 
Test::Warnings-0.031, Test::Without::Module-0.20, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, Text::CSV_XS-1.48, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::ParseWords-3.31, Text::Soundex-3.05, Text::Table-1.134, Text::Template-1.61, Thread::Queue-3.13, Throwable-1.000, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.30, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Types::Serialiser-1.01, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, URI-5.12, URI::Escape-5.12, Variable::Magic-0.62, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.73
"},{"location":"available_software/detail/Pillow-SIMD/","title":"Pillow-SIMD","text":"Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
https://github.com/uploadcare/pillow-simd
"},{"location":"available_software/detail/Pillow-SIMD/#available-modules","title":"Available modules","text":"The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Pillow-SIMD, load one of these modules using a module load
command like:
module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Pillow/","title":"Pillow","text":"Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
https://pillow.readthedocs.org/
"},{"location":"available_software/detail/Pillow/#available-modules","title":"Available modules","text":"The overview below shows which Pillow installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Pillow, load one of these modules using a module load
command like:
module load Pillow/10.2.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow/10.2.0-GCCcore-13.2.0 x x x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Pint/","title":"Pint","text":"Pint is a Python package to define, operate and manipulate physical quantities: the product of a numerical value and a unit of measurement. It allows arithmetic operations between them and conversions from and to different units.
https://github.com/hgrecco/pint
"},{"location":"available_software/detail/Pint/#available-modules","title":"Available modules","text":"The overview below shows which Pint installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Pint, load one of these modules using a module load
command like:
module load Pint/0.23-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pint/0.23-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PuLP/","title":"PuLP","text":"PuLP is an LP modeler written in Python. PuLP can generate MPS or LP files and call GLPK, COIN-OR CLP/CBC, CPLEX, GUROBI, MOSEK, XPRESS, CHOCO, MIPCL, SCIP to solve linear problems.
https://github.com/coin-or/pulp
"},{"location":"available_software/detail/PuLP/#available-modules","title":"Available modules","text":"The overview below shows which PuLP installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PuLP, load one of these modules using a module load
command like:
module load PuLP/2.8.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PuLP/2.8.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/","title":"PyQt-builder","text":"PyQt-builder is the PEP 517 compliant build system for PyQt and projects that extend PyQt. It extends the SIP build system and uses Qt\u2019s qmake to perform the actual compilation and installation of extension modules.
http://www.example.com
"},{"location":"available_software/detail/PyQt-builder/#available-modules","title":"Available modules","text":"The overview below shows which PyQt-builder installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PyQt-builder, load one of these modules using a module load
command like:
module load PyQt-builder/1.15.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt-builder/1.15.4-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/#pyqt-builder1154-gcccore-1230","title":"PyQt-builder/1.15.4-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
PyQt-builder-1.15.4
"},{"location":"available_software/detail/PyQt5/","title":"PyQt5","text":"PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company.This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company\u2019s Qt WebEngine framework.
https://www.riverbankcomputing.com/software/pyqt
"},{"location":"available_software/detail/PyQt5/#available-modules","title":"Available modules","text":"The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PyQt5, load one of these modules using a module load
command like:
module load PyQt5/5.15.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt5/5.15.10-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyTorch/","title":"PyTorch","text":"Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
https://pytorch.org/
"},{"location":"available_software/detail/PyTorch/#available-modules","title":"Available modules","text":"The overview below shows which PyTorch installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PyTorch, load one of these modules using a module load
command like:
module load PyTorch/2.1.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyTorch/2.1.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PyYAML/","title":"PyYAML","text":"PyYAML is a YAML parser and emitter for the Python programming language.
https://github.com/yaml/pyyaml
"},{"location":"available_software/detail/PyYAML/#available-modules","title":"Available modules","text":"The overview below shows which PyYAML installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PyYAML, load one of these modules using a module load
command like:
module load PyYAML/6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyYAML/6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyZMQ/","title":"PyZMQ","text":"Python bindings for ZeroMQ
https://www.zeromq.org/bindings:python
"},{"location":"available_software/detail/PyZMQ/#available-modules","title":"Available modules","text":"The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PyZMQ, load one of these modules using a module load
command like:
module load PyZMQ/25.1.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/","title":"Python-bundle-PyPI","text":"Bundle of Python packages from PyPI
https://python.org/
"},{"location":"available_software/detail/Python-bundle-PyPI/#available-modules","title":"Available modules","text":"The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Python-bundle-PyPI, load one of these modules using a module load
command like:
module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202310-gcccore-1320","title":"Python-bundle-PyPI/2023.10-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.13.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.6, bitarray-2.8.2, bitstring-4.1.2, blist-1.3.6, cachecontrol-0.13.1, cachy-0.3.0, certifi-2023.7.22, cffi-1.16.0, chardet-5.2.0, charset-normalizer-3.3.1, cleo-2.0.1, click-8.1.7, cloudpickle-3.0.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-3.0.4, decorator-5.1.1, distlib-0.3.7, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.6, ecdsa-0.18.0, editables-0.5, exceptiongroup-1.1.3, execnet-2.0.2, filelock-3.13.0, fsspec-2023.10.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.8.0, importlib_resources-6.1.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.3.0, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.3.2, jsonschema-4.17.3, keyring-24.2.0, keyrings.alt-5.0.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.1.0, more-itertools-10.1.0, msgpack-1.0.7, netaddr-0.9.0, netifaces-0.11.0, packaging-23.2, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.2, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, pluggy-1.3.0, pooch-1.8.0, psutil-5.9.6, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.19.0, pydevtool-0.3.0, Pygments-2.16.1, Pygments-2.16.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.1, pyrsistent-0.20.0, pytest-7.4.3, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3.post1, rapidfuzz-2.15.2, regex-2023.10.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.6.0, rich-click-1.7.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.4, simplegeneric-0.8.1, simplejson-3.19.2, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, sphinx-7.2.6, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib_applehelp-1.0.7, sphinxcontrib_devhelp-1.0.5, sphinxcontrib_htmlhelp-2.0.4, sphinxcontrib_qthelp-1.0.6, sphinxcontrib_serializinghtml-1.1.9, sphinxcontrib_websupport-1.2.6, tabulate-0.9.0, threadpoolctl-3.2.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.12.1, typing_extensions-4.8.0, ujson-5.8.0, urllib3-2.0.7, wcwidth-0.2.8, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.17.0
"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202306-gcccore-1230","title":"Python-bundle-PyPI/2023.06-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.12.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.5, bitstring-4.0.2, blist-1.3.6, CacheControl-0.12.14, cachy-0.3.0, certifi-2023.5.7, cffi-1.15.1, chardet-5.1.0, charset-normalizer-3.1.0, cleo-2.0.1, click-8.1.3, cloudpickle-2.2.1, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-0.29.35, decorator-5.1.1, distlib-0.3.6, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.5, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.1.1, execnet-1.9.0, filelock-3.12.2, fsspec-2023.6.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.7.0, importlib_resources-5.12.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.3, keyring-23.13.1, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.0.2, more-itertools-9.1.0, msgpack-1.0.5, netaddr-0.8.0, netifaces-0.11.0, packaging-23.1, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.1, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, pluggy-1.2.0, pooch-1.7.0, psutil-5.9.5, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.18.0, pydevtool-0.3.0, Pygments-2.15.1, Pygments-2.15.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.0, pyrsistent-0.19.3, pytest-7.4.0, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3, rapidfuzz-2.15.1, regex-2023.6.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.4.2, rich-click-1.6.1, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.0.post1, simplegeneric-0.8.1, simplejson-3.19.1, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-7.0.1, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.4, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.8, typing_extensions-4.6.3, ujson-5.8.0, urllib3-1.26.16, wcwidth-0.2.6, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.15.0
"},{"location":"available_software/detail/Python/","title":"Python","text":"Python is a programming language that lets you work more quickly and integrate your systems more effectively.
https://python.org/
"},{"location":"available_software/detail/Python/#available-modules","title":"Available modules","text":"The overview below shows which Python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Python, load one of these modules using a module load
command like:
module load Python/3.11.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python/3.11.5-GCCcore-13.2.0 x x x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x x x"},{"location":"available_software/detail/Python/#python3115-gcccore-1320","title":"Python/3.11.5-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
flit_core-3.9.0, pip-23.2.1, setuptools-68.2.2, wheel-0.41.2
"},{"location":"available_software/detail/Python/#python3113-gcccore-1230","title":"Python/3.11.3-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
flit_core-3.9.0, pip-23.1.2, setuptools-67.7.2, wheel-0.40.0
"},{"location":"available_software/detail/Python/#python3108-gcccore-1220","title":"Python/3.10.8-GCCcore-12.2.0","text":"This is a list of extensions included in the module:
alabaster-0.7.12, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-22.1.0, Babel-2.11.0, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.4, bcrypt-4.0.1, bitstring-3.1.9, blist-1.3.6, CacheControl-0.12.11, cachy-0.3.0, certifi-2022.9.24, cffi-1.15.1, chardet-5.0.0, charset-normalizer-2.1.1, cleo-1.0.0a5, click-8.1.3, clikit-0.6.2, cloudpickle-2.2.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.3.1, cryptography-38.0.3, Cython-0.29.32, decorator-5.1.1, distlib-0.3.6, docopt-0.6.2, docutils-0.19, doit-0.36.0, dulwich-0.20.50, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.0.1, execnet-1.9.0, filelock-3.8.0, flit-3.8.0, flit_core-3.8.0, flit_scm-1.7.0, fsspec-2022.11.0, future-0.18.2, glob2-0.7, hatch_fancy_pypi_readme-22.8.0, hatch_vcs-0.2.0, hatchling-1.11.1, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-5.0.0, importlib_resources-5.10.0, iniconfig-1.1.1, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.0, keyring-23.11.0, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, MarkupSafe-2.1.1, mock-4.0.3, more-itertools-9.0.0, msgpack-1.0.4, netaddr-0.8.0, netifaces-0.11.0, packaging-21.3, paramiko-2.12.0, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.10.1, pbr-5.11.0, pexpect-4.8.0, pip-22.3.1, pkginfo-1.8.3, platformdirs-2.5.3, pluggy-1.0.0, poetry-1.2.2, poetry-core-1.3.2, poetry_plugin_export-1.2.0, pooch-1.6.0, psutil-5.9.4, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.4.8, pycparser-2.21, pycryptodome-3.17, pydevtool-0.3.0, Pygments-2.13.0, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.0.9, pyrsistent-0.19.2, pytest-7.2.0, pytest-xdist-3.1.0, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2022.6, regex-2022.10.31, requests-2.28.1, requests-toolbelt-0.9.1, rich-13.1.0, rich-click-1.6.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, setuptools-63.4.3, setuptools-rust-1.5.2, setuptools_scm-7.0.5, shellingham-1.5.0, simplegeneric-0.8.1, simplejson-3.17.6, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-5.3.0, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.2, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.0, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.6, typing_extensions-4.4.0, ujson-5.5.0, urllib3-1.26.12, virtualenv-20.16.6, wcwidth-0.2.5, webencodings-0.5.1, wheel-0.38.4, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.10.0
"},{"location":"available_software/detail/Qhull/","title":"Qhull","text":"Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull.
http://www.qhull.org
"},{"location":"available_software/detail/Qhull/#available-modules","title":"Available modules","text":"The overview below shows which Qhull installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Qhull, load one of these modules using a module load
command like:
module load Qhull/2020.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qhull/2020.2-GCCcore-13.2.0 x x x x x x x x Qhull/2020.2-GCCcore-12.3.0 x x x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Qt5/","title":"Qt5","text":"Qt is a comprehensive cross-platform C++ application framework.
https://qt.io/
"},{"location":"available_software/detail/Qt5/#available-modules","title":"Available modules","text":"The overview below shows which Qt5 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Qt5, load one of these modules using a module load
command like:
module load Qt5/5.15.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qt5/5.15.10-GCCcore-12.3.0 x x x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/QuantumESPRESSO/","title":"QuantumESPRESSO","text":"Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
https://www.quantum-espresso.org
"},{"location":"available_software/detail/QuantumESPRESSO/#available-modules","title":"Available modules","text":"The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using QuantumESPRESSO, load one of these modules using a module load
command like:
module load QuantumESPRESSO/7.2-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 QuantumESPRESSO/7.2-foss-2022b x x x x x x x x"},{"location":"available_software/detail/R/","title":"R","text":"R is a free software environment for statistical computing and graphics.
https://www.r-project.org/
"},{"location":"available_software/detail/R/#available-modules","title":"Available modules","text":"The overview below shows which R installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using R, load one of these modules using a module load
command like:
module load R/4.3.2-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 R/4.3.2-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/R/#r432-gfbf-2023a","title":"R/4.3.2-gfbf-2023a","text":"This is a list of extensions included in the module:
askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-8, brio-1.1.3, bslib-0.5.1, cachem-1.0.8, callr-3.7.3, cli-3.6.1, clipr-0.8.0, commonmark-1.9.0, compiler, cpp11-0.4.6, crayon-1.5.2, credentials-2.0.1, curl-5.1.0, datasets, desc-1.4.2, devtools-2.4.5, diffobj-0.3.5, digest-0.6.33, downlit-0.4.3, ellipsis-0.3.2, evaluate-0.23, fansi-1.0.5, fastmap-1.1.1, fontawesome-0.5.2, fs-1.6.3, gert-2.0.0, gh-1.4.0, gitcreds-0.1.2, glue-1.6.2, graphics, grDevices, grid, highr-0.10, htmltools-0.5.7, htmlwidgets-1.6.2, httpuv-1.6.12, httr-1.4.7, httr2-0.2.3, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.7, knitr-1.45, later-1.3.1, lifecycle-1.0.3, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.1.1, parallel, pillar-1.9.0, pkgbuild-1.4.2, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.3, praise-1.0.0, prettyunits-1.2.0, processx-3.8.2, profvis-0.3.8, promises-1.2.1, ps-1.7.5, purrr-1.0.2, R6-2.5.1, ragg-1.2.6, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.11, rematch2-2.1.2, remotes-2.4.2.1, rlang-1.1.2, rmarkdown-2.25, roxygen2-7.2.3, rprojroot-2.0.4, rstudioapi-0.15.0, rversions-2.1.2, sass-0.4.7, sessioninfo-1.2.2, shiny-1.7.5.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.7.12, stringr-1.5.0, sys-3.4.2, systemfonts-1.0.5, tcltk, testthat-3.2.0, textshaping-0.3.7, tibble-3.2.1, tinytex-0.48, tools, urlchecker-1.0.1, usethis-2.2.2, utf8-1.2.4, utils, vctrs-0.6.4, waldo-0.5.2, whisker-0.4.1, withr-2.5.2, xfun-0.41, xml2-1.3.5, xopen-1.0.0, xtable-1.8-4, yaml-2.3.7, zip-2.3.0
"},{"location":"available_software/detail/RE2/","title":"RE2","text":"RE2 is a fast, safe, thread-friendly alternative to backtracking regularexpression engines like those used in PCRE, Perl, and Python. It is a C++library.
https://github.com/google/re2
"},{"location":"available_software/detail/RE2/#available-modules","title":"Available modules","text":"The overview below shows which RE2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using RE2, load one of these modules using a module load
command like:
module load RE2/2023-08-01-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 RE2/2023-08-01-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/ReFrame/","title":"ReFrame","text":"ReFrame is a framework for writing regression tests for HPC systems.
https://github.com/reframe-hpc/reframe
"},{"location":"available_software/detail/ReFrame/#available-modules","title":"Available modules","text":"The overview below shows which ReFrame installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ReFrame, load one of these modules using a module load
command like:
module load ReFrame/4.3.3\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ReFrame/4.3.3 x x x x x x x x"},{"location":"available_software/detail/ReFrame/#reframe433","title":"ReFrame/4.3.3","text":"This is a list of extensions included in the module:
pip-21.3.1, reframe-4.3.3, wheel-0.37.1
"},{"location":"available_software/detail/Rivet/","title":"Rivet","text":"Rivet toolkit (Robust Independent Validation of Experiment and Theory)To use your own analysis you must append the path to RIVET_ANALYSIS_PATH
.
https://gitlab.com/hepcedar/rivet
"},{"location":"available_software/detail/Rivet/#available-modules","title":"Available modules","text":"The overview below shows which Rivet installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Rivet, load one of these modules using a module load
command like:
module load Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6 x x x x x x x x"},{"location":"available_software/detail/Rust/","title":"Rust","text":"Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
https://www.rust-lang.org
"},{"location":"available_software/detail/Rust/#available-modules","title":"Available modules","text":"The overview below shows which Rust installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Rust, load one of these modules using a module load
command like:
module load Rust/1.73.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rust/1.73.0-GCCcore-13.2.0 x x x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/SCOTCH/","title":"SCOTCH","text":"Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.
https://www.labri.fr/perso/pelegrin/scotch/
"},{"location":"available_software/detail/SCOTCH/#available-modules","title":"Available modules","text":"The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using SCOTCH, load one of these modules using a module load
command like:
module load SCOTCH/7.0.3-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SCOTCH/7.0.3-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/SDL2/","title":"SDL2","text":"SDL: Simple DirectMedia Layer, a cross-platform multimedia library
https://www.libsdl.org/
"},{"location":"available_software/detail/SDL2/#available-modules","title":"Available modules","text":"The overview below shows which SDL2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using SDL2, load one of these modules using a module load
command like:
module load SDL2/2.28.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SDL2/2.28.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/SIP/","title":"SIP","text":"SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries.
http://www.riverbankcomputing.com/software/sip/
"},{"location":"available_software/detail/SIP/#available-modules","title":"Available modules","text":"The overview below shows which SIP installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using SIP, load one of these modules using a module load
command like:
module load SIP/6.8.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIP/6.8.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/SQLite/","title":"SQLite","text":"SQLite: SQL Database Engine in a C Library
https://www.sqlite.org/
"},{"location":"available_software/detail/SQLite/#available-modules","title":"Available modules","text":"The overview below shows which SQLite installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using SQLite, load one of these modules using a module load
command like:
module load SQLite/3.43.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLite/3.43.1-GCCcore-13.2.0 x x x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ScaFaCoS/","title":"ScaFaCoS","text":"ScaFaCoS is a library of scalable fast coulomb solvers.
http://www.scafacos.de/
"},{"location":"available_software/detail/ScaFaCoS/#available-modules","title":"Available modules","text":"The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ScaFaCoS, load one of these modules using a module load
command like:
module load ScaFaCoS/1.0.4-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaFaCoS/1.0.4-foss-2023a - - - x x x x x"},{"location":"available_software/detail/ScaLAPACK/","title":"ScaLAPACK","text":"The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.
https://www.netlib.org/scalapack/
"},{"location":"available_software/detail/ScaLAPACK/#available-modules","title":"Available modules","text":"The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ScaLAPACK, load one of these modules using a module load
command like:
module load ScaLAPACK/2.2.0-gompi-2023b-fb\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x x x"},{"location":"available_software/detail/SciPy-bundle/","title":"SciPy-bundle","text":"Bundle of Python packages for scientific software
https://python.org/
"},{"location":"available_software/detail/SciPy-bundle/#available-modules","title":"Available modules","text":"The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using SciPy-bundle, load one of these modules using a module load
command like:
module load SciPy-bundle/2023.11-gfbf-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciPy-bundle/2023.11-gfbf-2023b x x x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x x x"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202311-gfbf-2023b","title":"SciPy-bundle/2023.11-gfbf-2023b","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.1, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.7, numpy-1.26.2, pandas-2.1.3, ply-3.11, pythran-0.14.0, scipy-1.11.4, tzdata-2023.3, versioneer-0.29
"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202307-gfbf-2023a","title":"SciPy-bundle/2023.07-gfbf-2023a","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.0, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.4, numpy-1.25.1, pandas-2.0.3, ply-3.11, pythran-0.13.1, scipy-1.11.1, tzdata-2023.3, versioneer-0.29
"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202302-gfbf-2022b","title":"SciPy-bundle/2023.02-gfbf-2022b","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.5, deap-1.3.3, gast-0.5.3, mpmath-1.2.1, numexpr-2.8.4, numpy-1.24.2, pandas-1.5.3, ply-3.11, pythran-0.12.1, scipy-1.10.1
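A minimal check that the bundle is usable, assuming the module also loads its Python dependency (adjust the module version to the one you need):
module load SciPy-bundle/2023.11-gfbf-2023b
python -c 'import numpy, scipy, pandas; print(numpy.__version__, scipy.__version__, pandas.__version__)'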
"},{"location":"available_software/detail/Szip/","title":"Szip","text":"Szip compression software, providing lossless compression of scientific data
https://www.hdfgroup.org/doc_resource/SZIP/
"},{"location":"available_software/detail/Szip/#available-modules","title":"Available modules","text":"The overview below shows which Szip installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Szip, load one of these modules using a module load
command like:
module load Szip/2.1.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Szip/2.1.1-GCCcore-13.2.0 x x x x x x x x Szip/2.1.1-GCCcore-12.3.0 x x x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Tcl/","title":"Tcl","text":"Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
https://www.tcl.tk/
"},{"location":"available_software/detail/Tcl/#available-modules","title":"Available modules","text":"The overview below shows which Tcl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Tcl, load one of these modules using a module load
command like:
module load Tcl/8.6.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tcl/8.6.13-GCCcore-13.2.0 x x x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/TensorFlow/","title":"TensorFlow","text":"An open-source software library for Machine Intelligence
https://www.tensorflow.org/
"},{"location":"available_software/detail/TensorFlow/#available-modules","title":"Available modules","text":"The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using TensorFlow, load one of these modules using a module load
command like:
module load TensorFlow/2.13.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 TensorFlow/2.13.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/TensorFlow/#tensorflow2130-foss-2023a","title":"TensorFlow/2.13.0-foss-2023a","text":"This is a list of extensions included in the module:
absl-py-1.4.0, astor-0.8.1, astunparse-1.6.3, cachetools-5.3.1, google-auth-2.22.0, google-auth-oauthlib-1.0.0, google-pasta-0.2.0, grpcio-1.57.0, gviz-api-1.10.0, keras-2.13.1, Markdown-3.4.4, oauthlib-3.2.2, opt-einsum-3.3.0, portpicker-1.5.2, pyasn1-modules-0.3.0, requests-oauthlib-1.3.1, rsa-4.9, tblib-2.0.0, tensorboard-2.13.0, tensorboard-data-server-0.7.1, tensorboard-plugin-profile-2.13.1, tensorboard-plugin-wit-1.8.1, TensorFlow-2.13.0, tensorflow-estimator-2.13.0, termcolor-2.3.0, Werkzeug-2.3.7, wrapt-1.15.0
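A minimal, hedged check that the installation works (the module is expected to provide a matching Python interpreter as a dependency):
module load TensorFlow/2.13.0-foss-2023a
python -c 'import tensorflow as tf; print(tf.__version__)'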
"},{"location":"available_software/detail/Tk/","title":"Tk","text":"Tk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.
https://www.tcl.tk/
"},{"location":"available_software/detail/Tk/#available-modules","title":"Available modules","text":"The overview below shows which Tk installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Tk, load one of these modules using a module load
command like:
module load Tk/8.6.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tk/8.6.13-GCCcore-13.2.0 x x x x x x x x Tk/8.6.13-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Tkinter/","title":"Tkinter","text":"Tkinter module, built with the Python buildsystem
https://python.org/
"},{"location":"available_software/detail/Tkinter/#available-modules","title":"Available modules","text":"The overview below shows which Tkinter installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Tkinter, load one of these modules using a module load
command like:
module load Tkinter/3.11.5-GCCcore-13.2.0\n
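As an illustrative check (assuming you load the Tkinter module that matches the Python version you want to use):
module load Tkinter/3.11.5-GCCcore-13.2.0
python -c 'import tkinter; print(tkinter.TkVersion)'
Note that opening actual Tk windows additionally requires a display (e.g. X11 forwarding).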
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tkinter/3.11.5-GCCcore-13.2.0 x x x x x x x x Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/UCC-CUDA/","title":"UCC-CUDA","text":"UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes. This module adds the UCC CUDA support.
https://www.openucx.org/
"},{"location":"available_software/detail/UCC-CUDA/#available-modules","title":"Available modules","text":"The overview below shows which UCC-CUDA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UCC-CUDA, load one of these modules using a module load
command like:
module load UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/UCC/","title":"UCC","text":"UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes.
https://www.openucx.org/
"},{"location":"available_software/detail/UCC/#available-modules","title":"Available modules","text":"The overview below shows which UCC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UCC, load one of these modules using a module load
command like:
module load UCC/1.2.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC/1.2.0-GCCcore-13.2.0 x x x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/UCX-CUDA/","title":"UCX-CUDA","text":"Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications. This module adds the UCX CUDA support.
http://www.openucx.org/
"},{"location":"available_software/detail/UCX-CUDA/#available-modules","title":"Available modules","text":"The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UCX-CUDA, load one of these modules using a module load
command like:
module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/UCX/","title":"UCX","text":"Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications
https://www.openucx.org/
"},{"location":"available_software/detail/UCX/#available-modules","title":"Available modules","text":"The overview below shows which UCX installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UCX, load one of these modules using a module load
command like:
module load UCX/1.15.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX/1.15.0-GCCcore-13.2.0 x x x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/UDUNITS/","title":"UDUNITS","text":"UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.
https://www.unidata.ucar.edu/software/udunits/
"},{"location":"available_software/detail/UDUNITS/#available-modules","title":"Available modules","text":"The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UDUNITS, load one of these modules using a module load
command like:
module load UDUNITS/2.2.28-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UDUNITS/2.2.28-GCCcore-13.2.0 x x x x x x x x UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/UnZip/","title":"UnZip","text":"UnZip is an extraction utility for archives compressed in .zip format (also called \"zipfiles\"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.
http://www.info-zip.org/UnZip.html
"},{"location":"available_software/detail/UnZip/#available-modules","title":"Available modules","text":"The overview below shows which UnZip installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using UnZip, load one of these modules using a module load
command like:
module load UnZip/6.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UnZip/6.0-GCCcore-13.2.0 x x x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/VTK/","title":"VTK","text":"The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
https://www.vtk.org
"},{"location":"available_software/detail/VTK/#available-modules","title":"Available modules","text":"The overview below shows which VTK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using VTK, load one of these modules using a module load
command like:
module load VTK/9.3.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 VTK/9.3.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Voro%2B%2B/","title":"Voro++","text":"Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles.
http://math.lbl.gov/voro++/
"},{"location":"available_software/detail/Voro%2B%2B/#available-modules","title":"Available modules","text":"The overview below shows which Voro++ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Voro++, load one of these modules using a module load
command like:
module load Voro++/0.4.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Voro++/0.4.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/WCSLIB/","title":"WCSLIB","text":"The FITS \"World Coordinate System\" (WCS) standard defines keywords and usage that provide for the description of astronomical coordinate systems in a FITS image header.
https://www.atnf.csiro.au/people/mcalabre/WCS/
"},{"location":"available_software/detail/WCSLIB/#available-modules","title":"Available modules","text":"The overview below shows which WCSLIB installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using WCSLIB, load one of these modules using a module load
command like:
module load WCSLIB/7.11-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WCSLIB/7.11-GCC-13.2.0 x x x x x x x x"},{"location":"available_software/detail/WRF/","title":"WRF","text":"The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.
https://www.wrf-model.org
"},{"location":"available_software/detail/WRF/#available-modules","title":"Available modules","text":"The overview below shows which WRF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using WRF, load one of these modules using a module load
command like:
module load WRF/4.4.1-foss-2022b-dmpar\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WRF/4.4.1-foss-2022b-dmpar x x x x x x x x"},{"location":"available_software/detail/WSClean/","title":"WSClean","text":"WSClean (w-stacking clean) is a fast generic widefield imager. It implements several gridding algorithms and offers fully-automated multi-scale multi-frequency deconvolution.
https://wsclean.readthedocs.io/
"},{"location":"available_software/detail/WSClean/#available-modules","title":"Available modules","text":"The overview below shows which WSClean installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using WSClean, load one of these modules using a module load
command like:
module load WSClean/3.4-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WSClean/3.4-foss-2023b x x x x x x x x"},{"location":"available_software/detail/Wayland/","title":"Wayland","text":"Wayland is a project to define a protocol for a compositor to talk to its clients as well as a library implementation of the protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.
https://wayland.freedesktop.org/
"},{"location":"available_software/detail/Wayland/#available-modules","title":"Available modules","text":"The overview below shows which Wayland installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Wayland, load one of these modules using a module load
command like:
module load Wayland/1.22.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Wayland/1.22.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/X11/","title":"X11","text":"The X Window System (X11) is a windowing system for bitmap displays
https://www.x.org
"},{"location":"available_software/detail/X11/#available-modules","title":"Available modules","text":"The overview below shows which X11 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using X11, load one of these modules using a module load
command like:
module load X11/20231019-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 X11/20231019-GCCcore-13.2.0 x x x x x x x x X11/20230603-GCCcore-12.3.0 x x x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Xerces-C%2B%2B/","title":"Xerces-C++","text":"Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
https://xerces.apache.org/xerces-c/
"},{"location":"available_software/detail/Xerces-C%2B%2B/#available-modules","title":"Available modules","text":"The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Xerces-C++, load one of these modules using a module load
command like:
module load Xerces-C++/3.2.4-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/YODA/","title":"YODA","text":"Yet more Objects for (High Energy Physics) Data Analysis
https://yoda.hepforge.org/
"},{"location":"available_software/detail/YODA/#available-modules","title":"Available modules","text":"The overview below shows which YODA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using YODA, load one of these modules using a module load
command like:
module load YODA/1.9.9-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 YODA/1.9.9-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Yasm/","title":"Yasm","text":"Yasm: Complete rewrite of the NASM assembler with BSD license
https://www.tortall.net/projects/yasm/
"},{"location":"available_software/detail/Yasm/#available-modules","title":"Available modules","text":"The overview below shows which Yasm installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Yasm, load one of these modules using a module load
command like:
module load Yasm/1.3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Yasm/1.3.0-GCCcore-12.3.0 - - - x x x x x"},{"location":"available_software/detail/Z3/","title":"Z3","text":"Z3 is a theorem prover from Microsoft Research with support for bitvectors, booleans, arrays, floating point numbers, strings, and other data types. This module includes z3-solver, the Python interface of Z3.
https://github.com/Z3Prover/z3
"},{"location":"available_software/detail/Z3/#available-modules","title":"Available modules","text":"The overview below shows which Z3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Z3, load one of these modules using a module load
command like:
module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x x x"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230-python-3113","title":"Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3","text":"This is a list of extensions included in the module:
z3-solver-4.12.2.0
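A small, hedged example of using the bundled z3-solver Python interface after loading the module (assuming the module's Python dependency is loaded along with it):
module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3
python -c 'from z3 import Int, Solver; x = Int("x"); s = Solver(); s.add(x > 2, x < 5); print(s.check(), s.model())'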
"},{"location":"available_software/detail/ZeroMQ/","title":"ZeroMQ","text":"ZeroMQ looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems.
https://www.zeromq.org/
"},{"location":"available_software/detail/ZeroMQ/#available-modules","title":"Available modules","text":"The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ZeroMQ, load one of these modules using a module load
command like:
module load ZeroMQ/4.3.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Zip/","title":"Zip","text":"Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality
http://www.info-zip.org/Zip.html
"},{"location":"available_software/detail/Zip/#available-modules","title":"Available modules","text":"The overview below shows which Zip installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Zip, load one of these modules using a module load
command like:
module load Zip/3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Zip/3.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/archspec/","title":"archspec","text":"A library for detecting, labeling, and reasoning about microarchitectures
https://github.com/archspec/archspec
"},{"location":"available_software/detail/archspec/#available-modules","title":"Available modules","text":"The overview below shows which archspec installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using archspec, load one of these modules using a module load
command like:
module load archspec/0.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 archspec/0.2.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/arpack-ng/","title":"arpack-ng","text":"ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
https://github.com/opencollab/arpack-ng
"},{"location":"available_software/detail/arpack-ng/#available-modules","title":"Available modules","text":"The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using arpack-ng, load one of these modules using a module load
command like:
module load arpack-ng/3.9.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 arpack-ng/3.9.0-foss-2023b x x x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x x x"},{"location":"available_software/detail/at-spi2-atk/","title":"at-spi2-atk","text":"AT-SPI 2 toolkit bridge
https://wiki.gnome.org/Accessibility
"},{"location":"available_software/detail/at-spi2-atk/#available-modules","title":"Available modules","text":"The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using at-spi2-atk, load one of these modules using a module load
command like:
module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/at-spi2-core/","title":"at-spi2-core","text":"Assistive Technology Service Provider Interface.
https://wiki.gnome.org/Accessibility
"},{"location":"available_software/detail/at-spi2-core/#available-modules","title":"Available modules","text":"The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using at-spi2-core, load one of these modules using a module load
command like:
module load at-spi2-core/2.49.91-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-core/2.49.91-GCCcore-12.3.0 x x x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/bokeh/","title":"bokeh","text":"Statistical and novel interactive HTML plots for Python
https://github.com/bokeh/bokeh
"},{"location":"available_software/detail/bokeh/#available-modules","title":"Available modules","text":"The overview below shows which bokeh installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using bokeh, load one of these modules using a module load
command like:
module load bokeh/3.2.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 bokeh/3.2.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/bokeh/#bokeh322-foss-2023a","title":"bokeh/3.2.2-foss-2023a","text":"This is a list of extensions included in the module:
bokeh-3.2.2, contourpy-1.0.7, xyzservices-2023.7.0
"},{"location":"available_software/detail/cURL/","title":"cURL","text":"libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.
https://curl.haxx.se
"},{"location":"available_software/detail/cURL/#available-modules","title":"Available modules","text":"The overview below shows which cURL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using cURL, load one of these modules using a module load
command like:
module load cURL/8.3.0-GCCcore-13.2.0\n
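A quick way to confirm which curl you picked up after loading the module (assuming the module provides the curl command, as it normally does):
module load cURL/8.3.0-GCCcore-13.2.0
curl --version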
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cURL/8.3.0-GCCcore-13.2.0 x x x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/cairo/","title":"cairo","text":"Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB
https://cairographics.org
"},{"location":"available_software/detail/cairo/#available-modules","title":"Available modules","text":"The overview below shows which cairo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using cairo, load one of these modules using a module load
command like:
module load cairo/1.17.8-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cairo/1.17.8-GCCcore-12.3.0 x x x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/casacore/","title":"casacore","text":"A suite of C++ libraries for radio astronomy data processing. The ephemerides data needs to be in DATA_DIR and the location must be specified at runtime. Thus users can update them.
https://github.com/casacore/casacore
"},{"location":"available_software/detail/casacore/#available-modules","title":"Available modules","text":"The overview below shows which casacore installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using casacore, load one of these modules using a module load
command like:
module load casacore/3.5.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 casacore/3.5.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/cffi/","title":"cffi","text":"C Foreign Function Interface for Python. Interact with almost any C code from Python, based on C-like declarations that you can often copy-paste from header files or documentation.
https://cffi.readthedocs.io/en/latest/
"},{"location":"available_software/detail/cffi/#available-modules","title":"Available modules","text":"The overview below shows which cffi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using cffi, load one of these modules using a module load
command like:
module load cffi/1.15.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cffi/1.15.1-GCCcore-13.2.0 x x x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1320","title":"cffi/1.15.1-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
cffi-1.15.1, pycparser-2.21
"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1230","title":"cffi/1.15.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
cffi-1.15.1, pycparser-2.21
"},{"location":"available_software/detail/cppy/","title":"cppy","text":"A small C++ header library which makes it easier to writePython extension modules. The primary feature is a PyObject smart pointerwhich automatically handles reference counting and provides conveniencemethods for performing common object operations.
https://github.com/nucleic/cppy
"},{"location":"available_software/detail/cppy/#available-modules","title":"Available modules","text":"The overview below shows which cppy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using cppy, load one of these modules using a module load
command like:
module load cppy/1.2.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cppy/1.2.1-GCCcore-13.2.0 x x x x x x x x cppy/1.2.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/cryptography/","title":"cryptography","text":"cryptography is a package designed to expose cryptographic primitives and recipes to Python developers.
https://github.com/pyca/cryptography
"},{"location":"available_software/detail/cryptography/#available-modules","title":"Available modules","text":"The overview below shows which cryptography installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using cryptography, load one of these modules using a module load
command like:
module load cryptography/41.0.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cryptography/41.0.5-GCCcore-13.2.0 x x x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/dask/","title":"dask","text":"Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
https://dask.org/
"},{"location":"available_software/detail/dask/#available-modules","title":"Available modules","text":"The overview below shows which dask installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using dask, load one of these modules using a module load
command like:
module load dask/2023.9.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 dask/2023.9.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/dask/#dask202392-foss-2023a","title":"dask/2023.9.2-foss-2023a","text":"This is a list of extensions included in the module:
dask-2023.9.2, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.9.2, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, zict-3.0.0
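A minimal sketch of using the module (assuming its Python dependency is loaded along with it); this just builds a small dask array and computes its sum:
module load dask/2023.9.2-foss-2023a
python -c 'import dask.array as da; print(da.ones((1000, 1000)).sum().compute())'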
"},{"location":"available_software/detail/dill/","title":"dill","text":"dill extends python's pickle module for serializing and de-serializing python objects to the majority of the built-in python types. Serialization is the process of converting an object to a byte stream, and the inverse of which is converting a byte stream back to on python object hierarchy.
https://pypi.org/project/dill/
"},{"location":"available_software/detail/dill/#available-modules","title":"Available modules","text":"The overview below shows which dill installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using dill, load one of these modules using a module load
command like:
module load dill/0.3.7-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 dill/0.3.7-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/double-conversion/","title":"double-conversion","text":"Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
https://github.com/google/double-conversion
"},{"location":"available_software/detail/double-conversion/#available-modules","title":"Available modules","text":"The overview below shows which double-conversion installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using double-conversion, load one of these modules using a module load
command like:
module load double-conversion/3.3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ecBuild/","title":"ecBuild","text":"A CMake-based build system, consisting of a collection of CMake macros and functions that ease the managing of software build systems
https://ecbuild.readthedocs.io/
"},{"location":"available_software/detail/ecBuild/#available-modules","title":"Available modules","text":"The overview below shows which ecBuild installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ecBuild, load one of these modules using a module load
command like:
module load ecBuild/3.8.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecBuild/3.8.0 x x x x x x x x"},{"location":"available_software/detail/ecCodes/","title":"ecCodes","text":"ecCodes is a package developed by ECMWF which provides an application programming interface and a set of tools for decoding and encoding messages in the following formats: WMO FM-92 GRIB edition 1 and edition 2, WMO FM-94 BUFR edition 3 and edition 4, WMO GTS abbreviated header (only decoding).
https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home
"},{"location":"available_software/detail/ecCodes/#available-modules","title":"Available modules","text":"The overview below shows which ecCodes installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ecCodes, load one of these modules using a module load
command like:
module load ecCodes/2.31.0-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecCodes/2.31.0-gompi-2023b x x x x x x x x ecCodes/2.31.0-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/expat/","title":"expat","text":"Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags).
https://libexpat.github.io
"},{"location":"available_software/detail/expat/#available-modules","title":"Available modules","text":"The overview below shows which expat installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using expat, load one of these modules using a module load
command like:
module load expat/2.5.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 expat/2.5.0-GCCcore-13.2.0 x x x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/expecttest/","title":"expecttest","text":"This library implements expect tests (also known as \"golden\" tests). Expect tests are a method of writing tests where instead of hard-coding the expected output of a test, you run the test to get the output, and the test framework automatically populates the expected output. If the output of the test changes, you can rerun the test with the environment variable EXPECTTEST_ACCEPT=1 to automatically update the expected output.
https://github.com/ezyang/expecttest
"},{"location":"available_software/detail/expecttest/#available-modules","title":"Available modules","text":"The overview below shows which expecttest installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using expecttest, load one of these modules using a module load
command like:
module load expecttest/0.1.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 expecttest/0.1.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/fastjet-contrib/","title":"fastjet-contrib","text":"3rd party extensions of FastJet
https://fastjet.hepforge.org/contrib/
"},{"location":"available_software/detail/fastjet-contrib/#available-modules","title":"Available modules","text":"The overview below shows which fastjet-contrib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using fastjet-contrib, load one of these modules using a module load
command like:
module load fastjet-contrib/1.053-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet-contrib/1.053-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/fastjet/","title":"fastjet","text":"A software package for jet finding in pp and e+e- collisions
https://fastjet.fr/
"},{"location":"available_software/detail/fastjet/#available-modules","title":"Available modules","text":"The overview below shows which fastjet installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using fastjet, load one of these modules using a module load
command like:
module load fastjet/3.4.2-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet/3.4.2-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/ffnvcodec/","title":"ffnvcodec","text":"FFmpeg nvidia headers. Adds support for nvenc and nvdec. Requires Nvidia GPU and drivers to be present (picked up dynamically).
https://git.videolan.org/?p=ffmpeg/nv-codec-headers.git
"},{"location":"available_software/detail/ffnvcodec/#available-modules","title":"Available modules","text":"The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ffnvcodec, load one of these modules using a module load
command like:
module load ffnvcodec/12.0.16.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ffnvcodec/12.0.16.0 x x x x x x x x"},{"location":"available_software/detail/flatbuffers-python/","title":"flatbuffers-python","text":"Python Flatbuffers runtime library.
https://github.com/google/flatbuffers/
"},{"location":"available_software/detail/flatbuffers-python/#available-modules","title":"Available modules","text":"The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using flatbuffers-python, load one of these modules using a module load
command like:
module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flatbuffers/","title":"flatbuffers","text":"FlatBuffers: Memory Efficient Serialization Library
https://github.com/google/flatbuffers/
"},{"location":"available_software/detail/flatbuffers/#available-modules","title":"Available modules","text":"The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using flatbuffers, load one of these modules using a module load
command like:
module load flatbuffers/23.5.26-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flit/","title":"flit","text":"A simple packaging tool for simple packages.
https://github.com/pypa/flit
"},{"location":"available_software/detail/flit/#available-modules","title":"Available modules","text":"The overview below shows which flit installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using flit, load one of these modules using a module load
command like:
module load flit/3.9.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flit/3.9.0-GCCcore-13.2.0 x x x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flit/#flit390-gcccore-1320","title":"flit/3.9.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
certifi-2023.7.22, charset-normalizer-3.3.1, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.2, requests-2.31.0, setuptools-scm-8.0.4, tomli_w-1.0.0, typing_extensions-4.8.0, urllib3-2.0.7
"},{"location":"available_software/detail/flit/#flit390-gcccore-1230","title":"flit/3.9.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
certifi-2023.5.7, charset-normalizer-3.1.0, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.1, requests-2.31.0, setuptools_scm-7.1.0, tomli_w-1.0.0, typing_extensions-4.6.3, urllib3-1.26.16
"},{"location":"available_software/detail/fontconfig/","title":"fontconfig","text":"Fontconfig is a library designed to provide system-wide font configuration, customization and application access.
https://www.freedesktop.org/wiki/Software/fontconfig/
"},{"location":"available_software/detail/fontconfig/#available-modules","title":"Available modules","text":"The overview below shows which fontconfig installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using fontconfig, load one of these modules using a module load
command like:
module load fontconfig/2.14.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fontconfig/2.14.2-GCCcore-13.2.0 x x x x x x x x fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/foss/","title":"foss","text":"GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain
"},{"location":"available_software/detail/foss/#available-modules","title":"Available modules","text":"The overview below shows which foss installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using foss, load one of these modules using a module load
command like:
module load foss/2023b\n
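To see which compiler and MPI wrapper a given foss toolchain version provides, you could (as an illustrative check) run:
module load foss/2023b
gcc --version
mpicc --version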
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 foss/2023b x x x x x x x x foss/2023a x x x x x x x x foss/2022b x x x x x x x x"},{"location":"available_software/detail/freetype/","title":"freetype","text":"FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.
https://www.freetype.org
"},{"location":"available_software/detail/freetype/#available-modules","title":"Available modules","text":"The overview below shows which freetype installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using freetype, load one of these modules using a module load
command like:
module load freetype/2.13.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 freetype/2.13.2-GCCcore-13.2.0 x x x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/gfbf/","title":"gfbf","text":"GNU Compiler Collection (GCC) based compiler toolchain, including FlexiBLAS (BLAS and LAPACK support) and (serial) FFTW.
(none)
"},{"location":"available_software/detail/gfbf/#available-modules","title":"Available modules","text":"The overview below shows which gfbf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using gfbf, load one of these modules using a module load
command like:
module load gfbf/2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gfbf/2023b x x x x x x x x gfbf/2023a x x x x x x x x gfbf/2022b x x x x x x x x"},{"location":"available_software/detail/giflib/","title":"giflib","text":"giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
http://giflib.sourceforge.net/
"},{"location":"available_software/detail/giflib/#available-modules","title":"Available modules","text":"The overview below shows which giflib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using giflib, load one of these modules using a module load
command like:
module load giflib/5.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 giflib/5.2.1-GCCcore-12.3.0 x x x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/git/","title":"git","text":"Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
https://git-scm.com
"},{"location":"available_software/detail/git/#available-modules","title":"Available modules","text":"The overview below shows which git installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using git, load one of these modules using a module load
command like:
module load git/2.42.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 git/2.42.0-GCCcore-13.2.0 x x x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x x x"},{"location":"available_software/detail/gmpy2/","title":"gmpy2","text":"GMP/MPIR, MPFR, and MPC interface to Python 2.6+ and 3.x
https://github.com/aleaxit/gmpy
"},{"location":"available_software/detail/gmpy2/#available-modules","title":"Available modules","text":"The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using gmpy2, load one of these modules using a module load
command like:
module load gmpy2/2.1.5-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmpy2/2.1.5-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/gnuplot/","title":"gnuplot","text":"Portable interactive, function plotting utility
http://gnuplot.sourceforge.net
"},{"location":"available_software/detail/gnuplot/#available-modules","title":"Available modules","text":"The overview below shows which gnuplot installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using gnuplot, load one of these modules using a module load
command like:
module load gnuplot/5.4.8-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/gompi/","title":"gompi","text":"GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
(none)
"},{"location":"available_software/detail/gompi/#available-modules","title":"Available modules","text":"The overview below shows which gompi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using gompi, load one of these modules using a module load
command like:
module load gompi/2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gompi/2023b x x x x x x x x gompi/2023a x x x x x x x x gompi/2022b x x x x x x x x"},{"location":"available_software/detail/googletest/","title":"googletest","text":"Google's framework for writing C++ tests on a variety of platforms
https://github.com/google/googletest
"},{"location":"available_software/detail/googletest/#available-modules","title":"Available modules","text":"The overview below shows which googletest installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using googletest, load one of these modules using a module load
command like:
module load googletest/1.14.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 googletest/1.14.0-GCCcore-13.2.0 x x x x x x x x googletest/1.13.0-GCCcore-12.3.0 x x x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/graphite2/","title":"graphite2","text":"Graphite is a \"smart font\" system developed specifically to handle the complexities of lesser-known languages of the world.
https://scripts.sil.org/cms/scripts/page.php?site_id=projects&item_id=graphite_home
"},{"location":"available_software/detail/graphite2/#available-modules","title":"Available modules","text":"The overview below shows which graphite2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using graphite2, load one of these modules using a module load
command like:
module load graphite2/1.3.14-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 graphite2/1.3.14-GCCcore-12.3.0 x x x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/groff/","title":"groff","text":"Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.
https://www.gnu.org/software/groff
"},{"location":"available_software/detail/groff/#available-modules","title":"Available modules","text":"The overview below shows which groff installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using groff, load one of these modules using a module load
command like:
module load groff/1.22.4-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 groff/1.22.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/gzip/","title":"gzip","text":"gzip (GNU zip) is a popular data compression program as a replacement for compress
https://www.gnu.org/software/gzip/
"},{"location":"available_software/detail/gzip/#available-modules","title":"Available modules","text":"The overview below shows which gzip installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using gzip, load one of these modules using a module load
command like:
module load gzip/1.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gzip/1.13-GCCcore-13.2.0 x x x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/h5py/","title":"h5py","text":"HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
https://www.h5py.org/
"},{"location":"available_software/detail/h5py/#available-modules","title":"Available modules","text":"The overview below shows which h5py installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using h5py, load one of these modules using a module load
command like:
module load h5py/3.9.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5py/3.9.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/hatchling/","title":"hatchling","text":"Extensible, standards compliant build backend used by Hatch, a modern, extensible Python project manager.
https://hatch.pypa.io
"},{"location":"available_software/detail/hatchling/#available-modules","title":"Available modules","text":"The overview below shows which hatchling installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using hatchling, load one of these modules using a module load
command like:
module load hatchling/1.18.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatchling/1.18.0-GCCcore-13.2.0 x x x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1320","title":"hatchling/1.18.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
editables-0.5, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, packaging-23.2, pathspec-0.11.2, pluggy-1.3.0, setuptools-scm-8.0.4, tomli-2.0.1, trove_classifiers-2023.10.18, typing_extensions-4.8.0
"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1230","title":"hatchling/1.18.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
editables-0.3, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, packaging-23.1, pathspec-0.11.1, pluggy-1.2.0, setuptools_scm-7.1.0, tomli-2.0.1, trove_classifiers-2023.5.24, typing_extensions-4.6.3
"},{"location":"available_software/detail/hwloc/","title":"hwloc","text":"The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
https://www.open-mpi.org/projects/hwloc/
"},{"location":"available_software/detail/hwloc/#available-modules","title":"Available modules","text":"The overview below shows which hwloc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using hwloc, load one of these modules using a module load
command like:
module load hwloc/2.9.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hwloc/2.9.2-GCCcore-13.2.0 x x x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/hypothesis/","title":"hypothesis","text":"Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
https://github.com/HypothesisWorks/hypothesis
"},{"location":"available_software/detail/hypothesis/#available-modules","title":"Available modules","text":"The overview below shows which hypothesis installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using hypothesis, load one of these modules using a module load
command like:
module load hypothesis/6.90.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/jbigkit/","title":"jbigkit","text":"JBIG-KIT is a software implementation of the JBIG1 data compression standard (ITU-T T.82), which was designed for bi-level image data, such as scanned documents.
https://www.cl.cam.ac.uk/~mgk25/jbigkit/
"},{"location":"available_software/detail/jbigkit/#available-modules","title":"Available modules","text":"The overview below shows which jbigkit installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using jbigkit, load one of these modules using a module load
command like:
module load jbigkit/2.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 jbigkit/2.1-GCCcore-13.2.0 x x x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/json-c/","title":"json-c","text":"JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C, output them as JSON formatted strings and parse JSON formatted strings back into the C representation of JSON objects.
https://github.com/json-c/json-c
"},{"location":"available_software/detail/json-c/#available-modules","title":"Available modules","text":"The overview below shows which json-c installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using json-c, load one of these modules using a module load
command like:
module load json-c/0.16-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 json-c/0.16-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/jupyter-server/","title":"jupyter-server","text":"The Jupyter Server provides the backend (i.e. the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.
https://jupyter.org/
"},{"location":"available_software/detail/jupyter-server/#available-modules","title":"Available modules","text":"The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using jupyter-server, load one of these modules using a module load
command like:
module load jupyter-server/2.7.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/jupyter-server/#jupyter-server272-gcccore-1230","title":"jupyter-server/2.7.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
anyio-3.7.1, argon2-cffi-bindings-21.2.0, argon2_cffi-23.1.0, arrow-1.2.3, bleach-6.0.0, comm-0.1.4, debugpy-1.6.7.post1, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.18.0, hatch_jupyter_builder-0.8.3, hatch_nodejs_version-0.3.1, ipykernel-6.25.1, ipython_genutils-0.2.0, ipywidgets-8.1.0, jsonschema-4.18.0, jsonschema_specifications-2023.7.1, jupyter_client-8.3.0, jupyter_core-5.3.1, jupyter_events-0.7.0, jupyter_packaging-0.12.3, jupyter_server-2.7.2, jupyter_server_terminals-0.4.4, jupyterlab_pygments-0.2.2, jupyterlab_widgets-3.0.8, mistune-3.0.1, nbclient-0.8.0, nbconvert-7.7.4, nbformat-5.9.2, nest_asyncio-1.5.7, notebook_shim-0.2.3, overrides-7.4.0, pandocfilters-1.5.0, prometheus_client-0.17.1, python-json-logger-2.0.7, referencing-0.30.2, rfc3339_validator-0.1.4, rfc3986_validator-0.1.1, rpds_py-0.9.2, Send2Trash-1.8.2, sniffio-1.3.0, terminado-0.17.1, tinycss2-1.2.1, websocket-client-1.6.1, widgetsnbextension-4.0.8
"},{"location":"available_software/detail/kim-api/","title":"kim-api","text":"Open Knowledgebase of Interatomic Models.KIM is an API and OpenKIM is a collection of interatomic models (potentials) foratomistic simulations. This is a library that can be used by simulation programsto get access to the models in the OpenKIM database.This EasyBuild only installs the API, the models can be installed with thepackage openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAMEor kim-api-collections-management install user OpenKIMto install them all.
https://openkim.org/
"},{"location":"available_software/detail/kim-api/#available-modules","title":"Available modules","text":"The overview below shows which kim-api installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using kim-api, load one of these modules using a module load
command like:
module load kim-api/2.3.0-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 kim-api/2.3.0-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libGLU/","title":"libGLU","text":"The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.
https://mesa.freedesktop.org/archive/glu/
"},{"location":"available_software/detail/libGLU/#available-modules","title":"Available modules","text":"The overview below shows which libGLU installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libGLU, load one of these modules using a module load
command like:
module load libGLU/9.0.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libGLU/9.0.3-GCCcore-12.3.0 x x x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libaec/","title":"libaec","text":"Libaec provides fast lossless compression of 1 up to 32 bit wide signed or unsigned integers (samples). The library achieves best results for low entropy data as often encountered in space imaging instrument data or numerical model output from weather or climate simulations. While floating point representations are not directly supported, they can also be efficiently coded by grouping exponents and mantissa.
https://gitlab.dkrz.de/k202009/libaec
"},{"location":"available_software/detail/libaec/#available-modules","title":"Available modules","text":"The overview below shows which libaec installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libaec, load one of these modules using a module load
command like:
module load libaec/1.0.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaec/1.0.6-GCCcore-13.2.0 x x x x x x x x libaec/1.0.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libarchive/","title":"libarchive","text":"Multi-format archive and compression library
https://www.libarchive.org/
"},{"location":"available_software/detail/libarchive/#available-modules","title":"Available modules","text":"The overview below shows which libarchive installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libarchive, load one of these modules using a module load
command like:
module load libarchive/3.7.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libarchive/3.7.2-GCCcore-13.2.0 x x x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libcerf/","title":"libcerf","text":"libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.
https://jugit.fz-juelich.de/mlz/libcerf
"},{"location":"available_software/detail/libcerf/#available-modules","title":"Available modules","text":"The overview below shows which libcerf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libcerf, load one of these modules using a module load
command like:
module load libcerf/2.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcerf/2.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libdeflate/","title":"libdeflate","text":"Heavily optimized library for DEFLATE/zlib/gzip compression and decompression.
https://github.com/ebiggers/libdeflate
"},{"location":"available_software/detail/libdeflate/#available-modules","title":"Available modules","text":"The overview below shows which libdeflate installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libdeflate, load one of these modules using a module load
command like:
module load libdeflate/1.19-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdeflate/1.19-GCCcore-13.2.0 x x x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libdrm/","title":"libdrm","text":"Direct Rendering Manager runtime library.
https://dri.freedesktop.org
"},{"location":"available_software/detail/libdrm/#available-modules","title":"Available modules","text":"The overview below shows which libdrm installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libdrm, load one of these modules using a module load
command like:
module load libdrm/2.4.115-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdrm/2.4.115-GCCcore-12.3.0 x x x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libepoxy/","title":"libepoxy","text":"Epoxy is a library for handling OpenGL function pointer management for you
https://github.com/anholt/libepoxy
"},{"location":"available_software/detail/libepoxy/#available-modules","title":"Available modules","text":"The overview below shows which libepoxy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libepoxy, load one of these modules using a module load
command like:
module load libepoxy/1.5.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libevent/","title":"libevent","text":"The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.
https://libevent.org/
"},{"location":"available_software/detail/libevent/#available-modules","title":"Available modules","text":"The overview below shows which libevent installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libevent, load one of these modules using a module load
command like:
module load libevent/2.1.12-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libevent/2.1.12-GCCcore-13.2.0 x x x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libfabric/","title":"libfabric","text":"Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.
https://ofiwg.github.io/libfabric/
"},{"location":"available_software/detail/libfabric/#available-modules","title":"Available modules","text":"The overview below shows which libfabric installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libfabric, load one of these modules using a module load
command like:
module load libfabric/1.19.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libfabric/1.19.0-GCCcore-13.2.0 x x x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libffi/","title":"libffi","text":"The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
https://sourceware.org/libffi/
"},{"location":"available_software/detail/libffi/#available-modules","title":"Available modules","text":"The overview below shows which libffi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libffi, load one of these modules using a module load
command like:
module load libffi/3.4.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libffi/3.4.4-GCCcore-13.2.0 x x x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libgd/","title":"libgd","text":"GD is an open source code library for the dynamic creation of images by programmers.
https://libgd.github.io
"},{"location":"available_software/detail/libgd/#available-modules","title":"Available modules","text":"The overview below shows which libgd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgd, load one of these modules using a module load
command like:
module load libgd/2.3.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgd/2.3.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libgeotiff/","title":"libgeotiff","text":"Library for reading and writing coordinate system information from/to GeoTIFF files
https://directory.fsf.org/wiki/Libgeotiff
"},{"location":"available_software/detail/libgeotiff/#available-modules","title":"Available modules","text":"The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgeotiff, load one of these modules using a module load
command like:
module load libgeotiff/1.7.1-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libgit2/","title":"libgit2","text":"libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings.
https://libgit2.org/
"},{"location":"available_software/detail/libgit2/#available-modules","title":"Available modules","text":"The overview below shows which libgit2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgit2, load one of these modules using a module load
command like:
module load libgit2/1.7.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgit2/1.7.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libglvnd/","title":"libglvnd","text":"libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.
https://gitlab.freedesktop.org/glvnd/libglvnd
"},{"location":"available_software/detail/libglvnd/#available-modules","title":"Available modules","text":"The overview below shows which libglvnd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libglvnd, load one of these modules using a module load
command like:
module load libglvnd/1.6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libiconv/","title":"libiconv","text":"Libiconv converts from one character encoding to another through Unicode conversion
https://www.gnu.org/software/libiconv
"},{"location":"available_software/detail/libiconv/#available-modules","title":"Available modules","text":"The overview below shows which libiconv installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libiconv, load one of these modules using a module load
command like:
module load libiconv/1.17-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libiconv/1.17-GCCcore-13.2.0 x x x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libidn2/","title":"libidn2","text":"Libidn2 implements the revised algorithm for internationalized domain names called IDNA2008/TR46.
http://www.gnu.org/software/libidn2
"},{"location":"available_software/detail/libidn2/#available-modules","title":"Available modules","text":"The overview below shows which libidn2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libidn2, load one of these modules using a module load
command like:
module load libidn2/2.3.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libidn2/2.3.2-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/libjpeg-turbo/","title":"libjpeg-turbo","text":"libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
https://sourceforge.net/projects/libjpeg-turbo/
"},{"location":"available_software/detail/libjpeg-turbo/#available-modules","title":"Available modules","text":"The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libjpeg-turbo, load one of these modules using a module load
command like:
module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libpciaccess/","title":"libpciaccess","text":"Generic PCI access library.
https://cgit.freedesktop.org/xorg/lib/libpciaccess/
"},{"location":"available_software/detail/libpciaccess/#available-modules","title":"Available modules","text":"The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libpciaccess, load one of these modules using a module load
command like:
module load libpciaccess/0.17-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpciaccess/0.17-GCCcore-13.2.0 x x x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libpng/","title":"libpng","text":"libpng is the official PNG reference library
http://www.libpng.org/pub/png/libpng.html
"},{"location":"available_software/detail/libpng/#available-modules","title":"Available modules","text":"The overview below shows which libpng installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libpng, load one of these modules using a module load
command like:
module load libpng/1.6.40-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpng/1.6.40-GCCcore-13.2.0 x x x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libsodium/","title":"libsodium","text":"Sodium is a modern, easy-to-use software library for encryption, decryption, signatures, password hashing and more.
https://doc.libsodium.org/
"},{"location":"available_software/detail/libsodium/#available-modules","title":"Available modules","text":"The overview below shows which libsodium installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libsodium, load one of these modules using a module load
command like:
module load libsodium/1.0.18-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsodium/1.0.18-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libtirpc/","title":"libtirpc","text":"Libtirpc is a port of Sun's Transport-Independent RPC library to Linux.
https://sourceforge.net/projects/libtirpc/
"},{"location":"available_software/detail/libtirpc/#available-modules","title":"Available modules","text":"The overview below shows which libtirpc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libtirpc, load one of these modules using a module load
command like:
module load libtirpc/1.3.3-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libunwind/","title":"libunwind","text":"The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications
https://www.nongnu.org/libunwind/
"},{"location":"available_software/detail/libunwind/#available-modules","title":"Available modules","text":"The overview below shows which libunwind installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libunwind, load one of these modules using a module load
command like:
module load libunwind/1.6.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libunwind/1.6.2-GCCcore-12.3.0 x x x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libwebp/","title":"libwebp","text":"WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
https://developers.google.com/speed/webp/
"},{"location":"available_software/detail/libwebp/#available-modules","title":"Available modules","text":"The overview below shows which libwebp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libwebp, load one of these modules using a module load
command like:
module load libwebp/1.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libwebp/1.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libxc/","title":"libxc","text":"Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
https://www.tddft.org/programs/libxc
"},{"location":"available_software/detail/libxc/#available-modules","title":"Available modules","text":"The overview below shows which libxc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxc, load one of these modules using a module load
command like:
module load libxc/6.1.0-GCC-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxc/6.1.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libxml2/","title":"libxml2","text":"Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).
http://xmlsoft.org/
"},{"location":"available_software/detail/libxml2/#available-modules","title":"Available modules","text":"The overview below shows which libxml2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxml2, load one of these modules using a module load
command like:
module load libxml2/2.11.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2/2.11.5-GCCcore-13.2.0 x x x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libxslt/","title":"libxslt","text":"Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).
http://xmlsoft.org/
"},{"location":"available_software/detail/libxslt/#available-modules","title":"Available modules","text":"The overview below shows which libxslt installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxslt, load one of these modules using a module load
command like:
module load libxslt/1.1.38-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxslt/1.1.38-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libyaml/","title":"libyaml","text":"LibYAML is a YAML parser and emitter written in C.
https://pyyaml.org/wiki/LibYAML
"},{"location":"available_software/detail/libyaml/#available-modules","title":"Available modules","text":"The overview below shows which libyaml installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libyaml, load one of these modules using a module load
command like:
module load libyaml/0.2.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libyaml/0.2.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/lxml/","title":"lxml","text":"The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.
https://lxml.de/
"},{"location":"available_software/detail/lxml/#available-modules","title":"Available modules","text":"The overview below shows which lxml installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using lxml, load one of these modules using a module load
command like:
module load lxml/4.9.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 lxml/4.9.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/lz4/","title":"lz4","text":"LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.
https://lz4.github.io/lz4/
"},{"location":"available_software/detail/lz4/#available-modules","title":"Available modules","text":"The overview below shows which lz4 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using lz4, load one of these modules using a module load
command like:
module load lz4/1.9.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 lz4/1.9.4-GCCcore-13.2.0 x x x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/make/","title":"make","text":"GNU version of make utility
https://www.gnu.org/software/make/make.html
"},{"location":"available_software/detail/make/#available-modules","title":"Available modules","text":"The overview below shows which make installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using make, load one of these modules using a module load
command like:
module load make/4.4.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 make/4.4.1-GCCcore-13.2.0 x x x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x x x make/4.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/matplotlib/","title":"matplotlib","text":"matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.
https://matplotlib.org
"},{"location":"available_software/detail/matplotlib/#available-modules","title":"Available modules","text":"The overview below shows which matplotlib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using matplotlib, load one of these modules using a module load
command like:
module load matplotlib/3.8.2-gfbf-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 matplotlib/3.8.2-gfbf-2023b x x x x x x x x matplotlib/3.7.2-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/matplotlib/#matplotlib382-gfbf-2023b","title":"matplotlib/3.8.2-gfbf-2023b","text":"This is a list of extensions included in the module:
contourpy-1.2.0, Cycler-0.12.1, fonttools-4.47.0, kiwisolver-1.4.5, matplotlib-3.8.2
"},{"location":"available_software/detail/matplotlib/#matplotlib372-gfbf-2023a","title":"matplotlib/3.7.2-gfbf-2023a","text":"This is a list of extensions included in the module:
contourpy-1.1.0, Cycler-0.11.0, fonttools-4.42.0, kiwisolver-1.4.4, matplotlib-3.7.2
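To make the matplotlib entry above a little more concrete, here is a minimal sketch (not part of the generated module overview) of rendering a figure to a file after loading one of the matplotlib modules listed above; the backend choice, data values, and output filename are illustrative assumptions, not prescribed by the module itself.

```python
# Minimal sketch: assumes `module load matplotlib/3.8.2-gfbf-2023b` (or another
# matplotlib module from the overview above) has been done first.
import matplotlib
matplotlib.use("Agg")            # non-interactive backend; no display needed on a cluster
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9], marker="o")   # plot x against x squared
ax.set_xlabel("x")
ax.set_ylabel("x squared")
fig.savefig("example.png", dpi=150)               # arbitrary output filename
```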
"},{"location":"available_software/detail/maturin/","title":"maturin","text":"This project is meant as a zero configurationreplacement for setuptools-rust and milksnake. It supports buildingwheels for python 3.5+ on windows, linux, mac and freebsd, can uploadthem to pypi and has basic pypy and graalpy support.
https://github.com/pyo3/maturin
"},{"location":"available_software/detail/maturin/#available-modules","title":"Available modules","text":"The overview below shows which maturin installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using maturin, load one of these modules using a module load
command like:
module load maturin/1.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 maturin/1.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/meson-python/","title":"meson-python","text":"Python build backend (PEP 517) for Meson projects
https://github.com/mesonbuild/meson-python
"},{"location":"available_software/detail/meson-python/#available-modules","title":"Available modules","text":"The overview below shows which meson-python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using meson-python, load one of these modules using a module load
command like:
module load meson-python/0.15.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 meson-python/0.15.0-GCCcore-13.2.0 x x x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1320","title":"meson-python/0.15.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
meson-python-0.15.0, pyproject-metadata-0.7.1
"},{"location":"available_software/detail/meson-python/#meson-python0132-gcccore-1230","title":"meson-python/0.13.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
meson-python-0.13.2, pyproject-metadata-0.7.1
"},{"location":"available_software/detail/mpi4py/","title":"mpi4py","text":"MPI for Python (mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
https://github.com/mpi4py/mpi4py
"},{"location":"available_software/detail/mpi4py/#available-modules","title":"Available modules","text":"The overview below shows which mpi4py installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using mpi4py, load one of these modules using a module load
command like:
module load mpi4py/3.1.4-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpi4py/3.1.4-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2023a","title":"mpi4py/3.1.4-gompi-2023a","text":"This is a list of extensions included in the module:
mpi4py-3.1.4
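As a brief illustration of the mpi4py description above, the following is a minimal sketch (not part of the generated module overview) of a Python program using the MPI bindings after loading the module listed above; the script name and process count are arbitrary assumptions.

```python
# Minimal sketch: assumes `module load mpi4py/3.1.4-gompi-2023a` has been done,
# and that the script is launched through an MPI runner, e.g.:
#   mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD                      # communicator spanning all started ranks
rank = comm.Get_rank()                     # this process's rank (0 .. size-1)
size = comm.Get_size()                     # total number of ranks
print(f"Hello from rank {rank} of {size}")
```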
"},{"location":"available_software/detail/netCDF-Fortran/","title":"netCDF-Fortran","text":"NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
https://www.unidata.ucar.edu/software/netcdf/
"},{"location":"available_software/detail/netCDF-Fortran/#available-modules","title":"Available modules","text":"The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using netCDF-Fortran, load one of these modules using a module load
command like:
module load netCDF-Fortran/4.6.0-gompi-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF-Fortran/4.6.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/netCDF/","title":"netCDF","text":"NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
https://www.unidata.ucar.edu/software/netcdf/
"},{"location":"available_software/detail/netCDF/#available-modules","title":"Available modules","text":"The overview below shows which netCDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using netCDF, load one of these modules using a module load
command like:
module load netCDF/4.9.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF/4.9.2-gompi-2023b x x x x x x x x netCDF/4.9.2-gompi-2023a x x x x x x x x netCDF/4.9.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/networkx/","title":"networkx","text":"NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
https://pypi.python.org/pypi/networkx
"},{"location":"available_software/detail/networkx/#available-modules","title":"Available modules","text":"The overview below shows which networkx installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using networkx, load one of these modules using a module load
command like:
module load networkx/3.1-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 networkx/3.1-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/nlohmann_json/","title":"nlohmann_json","text":"JSON for Modern C++
https://github.com/nlohmann/json
"},{"location":"available_software/detail/nlohmann_json/#available-modules","title":"Available modules","text":"The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nlohmann_json, load one of these modules using a module load
command like:
module load nlohmann_json/3.11.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nlohmann_json/3.11.3-GCCcore-13.2.0 x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/nodejs/","title":"nodejs","text":"Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
https://nodejs.org
"},{"location":"available_software/detail/nodejs/#available-modules","title":"Available modules","text":"The overview below shows which nodejs installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nodejs, load one of these modules using a module load
command like:
module load nodejs/18.17.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nodejs/18.17.1-GCCcore-12.3.0 x x x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/nsync/","title":"nsync","text":"nsync is a C library that exports various synchronization primitives, such as mutexes
https://github.com/google/nsync
"},{"location":"available_software/detail/nsync/#available-modules","title":"Available modules","text":"The overview below shows which nsync installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nsync, load one of these modules using a module load
command like:
module load nsync/1.26.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nsync/1.26.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/numactl/","title":"numactl","text":"The numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.
https://github.com/numactl/numactl
"},{"location":"available_software/detail/numactl/#available-modules","title":"Available modules","text":"The overview below shows which numactl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using numactl, load one of these modules using a module load
command like:
module load numactl/2.0.16-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 numactl/2.0.16-GCCcore-13.2.0 x x x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/patchelf/","title":"patchelf","text":"PatchELF is a small utility to modify the dynamic linker and RPATH of ELF executables.
https://github.com/NixOS/patchelf
"},{"location":"available_software/detail/patchelf/#available-modules","title":"Available modules","text":"The overview below shows which patchelf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using patchelf, load one of these modules using a module load
command like:
module load patchelf/0.18.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 patchelf/0.18.0-GCCcore-13.2.0 x x x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pixman/","title":"pixman","text":"Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.
http://www.pixman.org/
"},{"location":"available_software/detail/pixman/#available-modules","title":"Available modules","text":"The overview below shows which pixman installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pixman, load one of these modules using a module load
command like:
module load pixman/0.42.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pixman/0.42.2-GCCcore-12.3.0 x x x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/pkgconf/","title":"pkgconf","text":"pkgconf is a program which helps to configure compiler and linker flags for development libraries. It is similar to pkg-config from freedesktop.org.
https://github.com/pkgconf/pkgconf
"},{"location":"available_software/detail/pkgconf/#available-modules","title":"Available modules","text":"The overview below shows which pkgconf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pkgconf, load one of these modules using a module load
command like:
module load pkgconf/2.0.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x x x pkgconf/1.8.0 x x x x x x x x"},{"location":"available_software/detail/pkgconfig/","title":"pkgconfig","text":"pkgconfig is a Python module to interface with the pkg-config command line tool
https://github.com/matze/pkgconfig
"},{"location":"available_software/detail/pkgconfig/#available-modules","title":"Available modules","text":"The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pkgconfig, load one of these modules using a module load
command like:
module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x x x"},{"location":"available_software/detail/poetry/","title":"poetry","text":"Python packaging and dependency management made easy. Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.
https://python-poetry.org
"},{"location":"available_software/detail/poetry/#available-modules","title":"Available modules","text":"The overview below shows which poetry installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using poetry, load one of these modules using a module load
command like:
module load poetry/1.6.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 poetry/1.6.1-GCCcore-13.2.0 x x x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/poetry/#poetry161-gcccore-1320","title":"poetry/1.6.1-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
attrs-23.1.0, build-0.10.0, cachecontrol-0.13.1, certifi-2023.7.22, charset-normalizer-3.3.1, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.6, html5lib-1.1, idna-3.4, importlib_metadata-6.8.0, installer-0.7.0, jaraco.classes-3.3.0, jeepney-0.8.0, jsonschema-4.17.3, keyring-24.2.0, lockfile-0.12.2, more-itertools-10.1.0, msgpack-1.0.7, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, poetry-1.6.1, poetry_core-1.7.0, poetry_plugin_export-1.5.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.20.0, rapidfuzz-2.15.2, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.4, six-1.16.0, tomlkit-0.12.1, urllib3-2.0.7, webencodings-0.5.1, zipp-3.17.0
"},{"location":"available_software/detail/poetry/#poetry151-gcccore-1230","title":"poetry/1.5.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
attrs-23.1.0, build-0.10.0, CacheControl-0.12.14, certifi-2023.5.7, charset-normalizer-3.1.0, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.5, html5lib-1.1, idna-3.4, importlib_metadata-6.7.0, installer-0.7.0, jaraco.classes-3.2.3, jeepney-0.8.0, jsonschema-4.17.3, keyring-23.13.1, lockfile-0.12.2, more-itertools-9.1.0, msgpack-1.0.5, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, poetry-1.5.1, poetry_core-1.6.1, poetry_plugin_export-1.4.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.19.3, rapidfuzz-2.15.1, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.0, six-1.16.0, tomlkit-0.11.8, urllib3-1.26.16, webencodings-0.5.1, zipp-3.15.0
"},{"location":"available_software/detail/protobuf-python/","title":"protobuf-python","text":"Python Protocol Buffers runtime library.
https://github.com/google/protobuf/
"},{"location":"available_software/detail/protobuf-python/#available-modules","title":"Available modules","text":"The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using protobuf-python, load one of these modules using a module load
command like:
module load protobuf-python/4.24.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/protobuf/","title":"protobuf","text":"Protocol Buffers (a.k.a., protobuf) are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.
https://github.com/protocolbuffers/protobuf
"},{"location":"available_software/detail/protobuf/#available-modules","title":"Available modules","text":"The overview below shows which protobuf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using protobuf, load one of these modules using a module load
command like:
module load protobuf/24.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf/24.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pybind11/","title":"pybind11","text":"pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.
https://pybind11.readthedocs.io
"},{"location":"available_software/detail/pybind11/#available-modules","title":"Available modules","text":"The overview below shows which pybind11 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pybind11, load one of these modules using a module load
command like:
module load pybind11/2.11.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pybind11/2.11.1-GCCcore-13.2.0 x x x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/pytest-flakefinder/","title":"pytest-flakefinder","text":"Runs tests multiple times to expose flakiness.
https://github.com/dropbox/pytest-flakefinder
"},{"location":"available_software/detail/pytest-flakefinder/#available-modules","title":"Available modules","text":"The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-flakefinder, load one of these modules using a module load
command like:
module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pytest-rerunfailures/","title":"pytest-rerunfailures","text":"pytest plugin to re-run tests to eliminate flaky failures.
https://github.com/pytest-dev/pytest-rerunfailures
"},{"location":"available_software/detail/pytest-rerunfailures/#available-modules","title":"Available modules","text":"The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-rerunfailures, load one of these modules using a module load
command like:
module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pytest-shard/","title":"pytest-shard","text":"pytest plugin to support parallelism across multiple machines.Shards tests based on a hash of their test name enabling easy parallelism across machines,suitable for a wide variety of continuous integration services.Tests are split at the finest level of granularity, individual test cases,enabling parallelism even if all of your tests are in a single file(or even single parameterized test method).
https://github.com/AdamGleave/pytest-shard
"},{"location":"available_software/detail/pytest-shard/#available-modules","title":"Available modules","text":"The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-shard, load one of these modules using a module load
command like:
module load pytest-shard/0.1.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/re2c/","title":"re2c","text":"re2c is a free and open-source lexer generator for C and C++. Its main goal is generatingfast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of usingtraditional table-driven approach, re2c encodes the generated finite state automata directly in the formof conditional jumps and comparisons.
https://re2c.org
"},{"location":"available_software/detail/re2c/#available-modules","title":"Available modules","text":"The overview below shows which re2c installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using re2c, load one of these modules using a module load
command like:
module load re2c/3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 re2c/3.1-GCCcore-12.3.0 x x x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/scikit-build/","title":"scikit-build","text":"Scikit-Build, or skbuild, is an improved build system generatorfor CPython C/C++/Fortran/Cython extensions.
https://scikit-build.readthedocs.io/en/latest
"},{"location":"available_software/detail/scikit-build/#available-modules","title":"Available modules","text":"The overview below shows which scikit-build installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using scikit-build, load one of these modules using a module load
command like:
module load scikit-build/0.17.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1320","title":"scikit-build/0.17.6-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
distro-1.8.0, packaging-23.1, scikit_build-0.17.6
"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1230","title":"scikit-build/0.17.6-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
distro-1.8.0, packaging-23.1, scikit_build-0.17.6
"},{"location":"available_software/detail/scikit-learn/","title":"scikit-learn","text":"Scikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world,building upon numpy, scipy, and matplotlib. As a machine-learning module,it provides versatile tools for data mining and analysis in any field of science and engineering.It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.
https://scikit-learn.org/stable/index.html
"},{"location":"available_software/detail/scikit-learn/#available-modules","title":"Available modules","text":"The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using scikit-learn, load one of these modules using a module load
command like:
module load scikit-learn/1.3.1-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-learn/1.3.1-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/scikit-learn/#scikit-learn131-gfbf-2023a","title":"scikit-learn/1.3.1-gfbf-2023a","text":"This is a list of extensions included in the module:
scikit-learn-1.3.1, sklearn-0.0
"},{"location":"available_software/detail/setuptools-rust/","title":"setuptools-rust","text":"setuptools-rust is a plugin for setuptools to build Rust Python extensionsimplemented with PyO3 or rust-cpython.
https://github.com/PyO3/setuptools-rust
"},{"location":"available_software/detail/setuptools-rust/#available-modules","title":"Available modules","text":"The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using setuptools-rust, load one of these modules using a module load
command like:
module load setuptools-rust/1.8.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust180-gcccore-1320","title":"setuptools-rust/1.8.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
semantic_version-2.10.0, setuptools-rust-1.8.0, typing_extensions-4.8.0
"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust160-gcccore-1230","title":"setuptools-rust/1.6.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
semantic_version-2.10.0, setuptools-rust-1.6.0, typing_extensions-4.6.3
"},{"location":"available_software/detail/siscone/","title":"siscone","text":"Hadron Seedless Infrared-Safe Cone jet algorithm
https://siscone.hepforge.org/
"},{"location":"available_software/detail/siscone/#available-modules","title":"Available modules","text":"The overview below shows which siscone installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using siscone, load one of these modules using a module load
command like:
module load siscone/3.0.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 siscone/3.0.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/snakemake/","title":"snakemake","text":"The Snakemake workflow management system is a tool to create reproducible and scalable data analyses.
https://snakemake.readthedocs.io
"},{"location":"available_software/detail/snakemake/#available-modules","title":"Available modules","text":"The overview below shows which snakemake installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using snakemake, load one of these modules using a module load
command like:
module load snakemake/8.4.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 snakemake/8.4.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/snakemake/#snakemake842-foss-2023a","title":"snakemake/8.4.2-foss-2023a","text":"This is a list of extensions included in the module:
argparse-dataclass-2.0.0, conda-inject-1.3.1, ConfigArgParse-1.7, connection-pool-0.0.3, datrie-0.8.2, dpath-2.1.6, fastjsonschema-2.19.1, humanfriendly-10.0, immutables-0.20, jupyter-core-5.7.1, nbformat-5.9.2, plac-1.4.2, reretry-0.11.8, smart-open-6.4.0, snakemake-8.4.2, snakemake-executor-plugin-cluster-generic-1.0.7, snakemake-executor-plugin-cluster-sync-0.1.3, snakemake-executor-plugin-flux-0.1.0, snakemake-executor-plugin-slurm-0.2.1, snakemake-executor-plugin-slurm-jobstep-0.1.10, snakemake-interface-common-1.15.2, snakemake-interface-executor-plugins-8.2.0, snakemake-interface-storage-plugins-3.0.0, stopit-1.1.2, throttler-1.2.2, toposort-1.10, yte-1.5.4
"},{"location":"available_software/detail/snappy/","title":"snappy","text":"Snappy is a compression/decompression library. It does not aimfor maximum compression, or compatibility with any other compression library;instead, it aims for very high speeds and reasonable compression.
https://github.com/google/snappy
"},{"location":"available_software/detail/snappy/#available-modules","title":"Available modules","text":"The overview below shows which snappy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using snappy, load one of these modules using a module load
command like:
module load snappy/1.1.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 snappy/1.1.10-GCCcore-12.3.0 x x x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/sympy/","title":"sympy","text":"SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.
https://sympy.org/
"},{"location":"available_software/detail/sympy/#available-modules","title":"Available modules","text":"The overview below shows which sympy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using sympy, load one of these modules using a module load
command like:
module load sympy/1.12-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 sympy/1.12-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/tbb/","title":"tbb","text":"Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
https://github.com/oneapi-src/oneTBB
"},{"location":"available_software/detail/tbb/#available-modules","title":"Available modules","text":"The overview below shows which tbb installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tbb, load one of these modules using a module load
command like:
module load tbb/2021.11.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tbb/2021.11.0-GCCcore-12.3.0 - - - x x x x x"},{"location":"available_software/detail/tcsh/","title":"tcsh","text":"Tcsh is an enhanced, but completely compatible version of the Berkeley UNIX C shell (csh). It is a command language interpreter usable both as an interactive login shell and a shell script command processor. It includes a command-line editor, programmable word completion, spelling correction, a history mechanism, job control and a C-like syntax.
https://www.tcsh.org
"},{"location":"available_software/detail/tcsh/#available-modules","title":"Available modules","text":"The overview below shows which tcsh installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tcsh, load one of these modules using a module load
command like:
module load tcsh/6.24.07-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tcsh/6.24.07-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/time/","title":"time","text":"The `time' command runs another program, then displays information about the resources used by that program, collected by the system while the program was running.
https://www.gnu.org/software/time/
"},{"location":"available_software/detail/time/#available-modules","title":"Available modules","text":"The overview below shows which time installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using time, load one of these modules using a module load
command like:
module load time/1.9-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 time/1.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/tornado/","title":"tornado","text":"Tornado is a Python web framework and asynchronous networking library.
https://github.com/tornadoweb/tornado
"},{"location":"available_software/detail/tornado/#available-modules","title":"Available modules","text":"The overview below shows which tornado installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tornado, load one of these modules using a module load
command like:
module load tornado/6.3.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tornado/6.3.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/typing-extensions/","title":"typing-extensions","text":"Typing Extensions \u2013 Backported and Experimental Type Hints for Python
https://github.com/python/typing/blob/master/typing_extensions/README.rst
"},{"location":"available_software/detail/typing-extensions/#available-modules","title":"Available modules","text":"The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using typing-extensions, load one of these modules using a module load
command like:
module load typing-extensions/4.9.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/virtualenv/","title":"virtualenv","text":"A tool for creating isolated virtual python environments.
https://github.com/pypa/virtualenv
"},{"location":"available_software/detail/virtualenv/#available-modules","title":"Available modules","text":"The overview below shows which virtualenv installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using virtualenv, load one of these modules using a module load
command like:
module load virtualenv/20.24.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/virtualenv/#virtualenv20246-gcccore-1320","title":"virtualenv/20.24.6-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
distlib-0.3.7, filelock-3.13.0, platformdirs-3.11.0, virtualenv-20.24.6
"},{"location":"available_software/detail/virtualenv/#virtualenv20231-gcccore-1230","title":"virtualenv/20.23.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
distlib-0.3.6, filelock-3.12.2, platformdirs-3.8.0, virtualenv-20.23.1
"},{"location":"available_software/detail/waLBerla/","title":"waLBerla","text":"Widely applicable Lattics-Boltzmann from Erlangen is a block-structured high-performance framework for multiphysics simulations
https://walberla.net/index.html
"},{"location":"available_software/detail/waLBerla/#available-modules","title":"Available modules","text":"The overview below shows which waLBerla installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using waLBerla, load one of these modules using a module load
command like:
module load waLBerla/6.1-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 waLBerla/6.1-foss-2022b x x x x x x x x"},{"location":"available_software/detail/wget/","title":"wget","text":"GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive commandline tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.
https://www.gnu.org/software/wget
"},{"location":"available_software/detail/wget/#available-modules","title":"Available modules","text":"The overview below shows which wget installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using wget, load one of these modules using a module load
command like:
module load wget/1.21.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 wget/1.21.4-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/wrapt/","title":"wrapt","text":"The aim of the wrapt module is to provide a transparent objectproxy for Python, which can be used as the basis for the construction offunction wrappers and decorator functions.
https://pypi.org/project/wrapt/
"},{"location":"available_software/detail/wrapt/#available-modules","title":"Available modules","text":"The overview below shows which wrapt installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using wrapt, load one of these modules using a module load
command like:
module load wrapt/1.15.0-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 wrapt/1.15.0-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/wrapt/#wrapt1150-gfbf-2023a","title":"wrapt/1.15.0-gfbf-2023a","text":"This is a list of extensions included in the module:
wrapt-1.15.0
"},{"location":"available_software/detail/x264/","title":"x264","text":"x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
https://www.videolan.org/developers/x264.html
"},{"location":"available_software/detail/x264/#available-modules","title":"Available modules","text":"The overview below shows which x264 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using x264, load one of these modules using a module load
command like:
module load x264/20230226-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 x264/20230226-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/x265/","title":"x265","text":"x265 is a free software library and application for encoding video streams into the H.265 AVC compression format, and is released under the terms of the GNU GPL.
https://x265.org/
"},{"location":"available_software/detail/x265/#available-modules","title":"Available modules","text":"The overview below shows which x265 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using x265, load one of these modules using a module load
command like:
module load x265/3.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 x265/3.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/xorg-macros/","title":"xorg-macros","text":"X.org macros utilities.
https://gitlab.freedesktop.org/xorg/util/macros
"},{"location":"available_software/detail/xorg-macros/#available-modules","title":"Available modules","text":"The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using xorg-macros, load one of these modules using a module load
command like:
module load xorg-macros/1.20.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/xxd/","title":"xxd","text":"xxd is part of the VIM package and this will only install xxd, not vim!xxd converts to/from hexdumps of binary files.
https://www.vim.org
"},{"location":"available_software/detail/xxd/#available-modules","title":"Available modules","text":"The overview below shows which xxd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using xxd, load one of these modules using a module load
command like:
module load xxd/9.0.2112-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxd/9.0.2112-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/zstd/","title":"zstd","text":"Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
https://facebook.github.io/zstd
"},{"location":"available_software/detail/zstd/#available-modules","title":"Available modules","text":"The overview below shows which zstd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using zstd, load one of these modules using a module load
command like:
module load zstd/1.5.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 zstd/1.5.5-GCCcore-13.2.0 x x x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"blog/","title":"Blog","text":""},{"location":"blog/2024/05/17/isc24/","title":"EESSI promo tour @ ISC'24 (May 2024, Hamburg)","text":"This week, we had the privilege of attending the ISC'24 conference in the beautiful city of Hamburg, Germany. This was an excellent opportunity for us to showcase EESSI, and gain valuable insights and feedback from the HPC community.
"},{"location":"blog/2024/05/17/isc24/#bof-session-on-eessi","title":"BoF session on EESSI","text":"The EESSI Birds-of-a-Feather (BoF) session on Tuesday morning, part of the official ISC'24 program, was the highlight of our activities in Hamburg.
It was well attended, with well over 100 people joining us at 9am.
During this session, we introduced the EESSI project with a short presentation, followed by a well-received live hands-on demo of installing and using EESSI by spinning up an \"empty\" Linux virtual machine instance in Amazon EC2 and getting optimized installations of popular scientific applications like GROMACS and TensorFlow running in a matter of minutes.
During the second part of the BoF session, we engaged with the audience through an interactive poll and by letting attendees ask questions.
The presentation slides, including the results of the interactive poll and questions that were raised by attendees, are available here.
"},{"location":"blog/2024/05/17/isc24/#workshops","title":"Workshops","text":"During the last day of ISC'24, EESSI was present in no less than three different workshops.
"},{"location":"blog/2024/05/17/isc24/#risc-v-workshop","title":"RISC-V workshop","text":"At the Fourth International workshop on RISC-V for HPC, Juli\u00e1n Morillo (BSC) presented our paper \"Preparing to Hit the Ground Running: Adding RISC-V support to EESSI\" (slides available here).
Juli\u00e1n covered the initial work that was done in the scope of the MultiXscale EuroHPC Centre-of-Excellence to add support for RISC-V to EESSI, outlined the challenges we encountered, and shared the lessons we have learned along the way.
"},{"location":"blog/2024/05/17/isc24/#ahug-workshop","title":"AHUG workshop","text":"During the Arm HPC User Group (AHUG) workshop, Kenneth Hoste (HPC-UGent) gave a talk entitled \"Extending Arm\u2019s Reach by Going EESSI\" (slides available here).
Next to a high-level introduction to EESSI, we briefly covered some of the challenges we encountered when testing the optimized software installations that we had built for the Arm Neoverse V1 microarchitecture, including bugs in OpenMPI and GROMACS.
Kenneth gave a live demonstration of how to get access to EESSI and start running the optimized software installations we provide through our CernVM-FS repository on a fresh AWS Graviton 3 instance in a matter of minutes.
"},{"location":"blog/2024/05/17/isc24/#pop-workshop","title":"POP workshop","text":"In the afternoon on Thursday, Lara Peeters (HPC-UGent) presented MultiXscale during the Readiness of HPC Extreme-scale Applications workshop, which was organised by the POP EuroHPC Centre-of-Excellence (slides available here).
Lara outlined the pilot use cases on which MultiXscale focuses, and explained how EESSI helps to achieve the goals of MultiXscale in terms of Productivity, Performance, and Portability.
At the end of the workshop, a group picture was taken with both organisers and speakers, which was a great way to wrap up a busy week in Hamburg!
"},{"location":"blog/2024/05/17/isc24/#talks-and-demos-on-eessi-at-exhibit","title":"Talks and demos on EESSI at exhibit","text":"Not only was EESSI part of the official ISC'24 program via a dedicated BoF session and various workshops: we were also prominently present on the exhibit floor.
"},{"location":"blog/2024/05/17/isc24/#microsoft-azure-booth","title":"Microsoft Azure booth","text":"Microsoft Azure invited us to give a 1-hour introductory presentation on EESSI on both Monday and Wednesday at their booth during the ISC'24 exhibit, as well as to provide live demonstrations at the demo corner of their booth on Tuesday afternoon on how to get access to EESSI and the user experience it provides.
Exhibit attendees were welcome to pass by and ask questions, and did so throughout the full 4 hours we were present there.
Both Microsoft Azure and AWS have been graciously providing resources in their cloud infrastructure free-of-cost for developing, testing, and demonstrating EESSI for several years now.
"},{"location":"blog/2024/05/17/isc24/#eurohpc-booth","title":"EuroHPC booth","text":"The MultiXscale EuroHPC Centre-of-Excellence we are actively involved in, and through which the development of EESSI is being co-funded since Jan'23, was invited by the EuroHPC JU to present the goals and preliminary achievements at their booth.
Elisabeth Ortega (HPCNow!) did the honours to give the last talk at the EuroHPC JU booth of the ISC'24 exhibit.
"},{"location":"blog/2024/05/17/isc24/#stickers","title":"Stickers!","text":"Last but not least: we handed out a boatload free stickers with the logo of both MultiXscale and EESSI itself, as well as of various of the open source software projects we leverage, including EasyBuild, Lmod, and CernVM-FS.
We have mostly exhausted our sticker collection during ISC'24, but don't worry: we will make sure we have more available at upcoming events...
"},{"location":"filesystem_layer/stratum1/","title":"Setting up a Stratum 1","text":"Setting up a Stratum 1 involves the following steps:
- set up the Stratum 1, preferably by running the Ansible playbook that we provide;
- request a Stratum 0 firewall exception for your Stratum 1 server;
- request a
<your site>.stratum1.cvmfs.eessi-infra.org
DNS entry; - open a pull request to include the URL to your Stratum 1 in the EESSI configuration.
The last two steps can be skipped if you want to host a \"private\" Stratum 1 for your site.
"},{"location":"filesystem_layer/stratum1/#requirements-for-a-stratum-1","title":"Requirements for a Stratum 1","text":"The main requirements for a Stratum 1 server are a good network connection to the clients it is going to serve, and sufficient disk space. For the EESSI repository, a few hundred gigabytes should suffice, but for production environments at least 1 TB would be recommended.
In terms of cores and memory, a machine with just a few (~4) cores and 4-8 GB of memory should suffice.
Various Linux distributions are supported, but we recommend one based on RHEL 7 or 8.
Finally, make sure that ports 80 (for the Apache web server) and 8000 are open.
"},{"location":"filesystem_layer/stratum1/#step-1-set-up-the-stratum-1","title":"Step 1: set up the Stratum 1","text":"The recommended way for setting up an EESSI Stratum 1 is by running the Ansible playbook stratum1.yml
from the filesystem-layer repository on GitHub.
Installing a Stratum 1 requires a GEO API license key, which will be used to find the (geographically) closest Stratum 1 server for your client and proxies. More information on how to (freely) obtain this key is available in the CVMFS documentation: https://cvmfs.readthedocs.io/en/stable/cpt-replica.html#geo-api-setup.
You can put your license key in the local configuration file inventory/local_site_specific_vars.yml
.
Furthermore, the Stratum 1 runs a Squid server. The template configuration file can be found at templates/eessi_stratum1_squid.conf.j2
. If you want to customize it, for instance for limiting the access to the Stratum 1, you can make your own version of this template file and point to it by setting local_stratum1_cvmfs_squid_conf_src
in inventory/local_site_specific_vars.yml
. See the comments in the example file for more details.
Start by installing Ansible:
sudo yum install -y ansible\n
Then install Ansible roles for EESSI:
ansible-galaxy role install -r requirements.yml -p ./roles --force\n
Make sure you have enough space in /srv
(on the Stratum 1) since the snapshot of the Stratum 0 will end up there by default. To alter the directory where the snapshot gets copied to you can add this variable in inventory/host_vars/<url-or-ip-to-your-stratum1>
:
cvmfs_srv_mount: /srv\n
Make sure that you have added the hostname or IP address of your server to the inventory/hosts
file. Finally, install the Stratum 1 using one of the two following options.
Option 1:
# -b to run as root, optionally use -K if a sudo password is required\nansible-playbook -b [-K] -e @inventory/local_site_specific_vars.yml stratum1.yml\n
Option2:
Create a ssh key pair and make sure the ansible-host-keys.pub
is in the $HOME/.ssh/authorized_keys
file on your Stratum 1 server.
ssh-keygen -b 2048 -t rsa -f ~/.ssh/ansible-host-keys -q -N \"\"\n
Then run the playbook:
ansible-playbook -b --private-key ~/.ssh/ansible-host-keys -e @inventory/local_site_specific_vars.yml stratum1.yml\n
Running the playbook will automatically make replicas of all the repositories defined in group_vars/all.yml
.
"},{"location":"filesystem_layer/stratum1/#step-2-request-a-firewall-exception","title":"Step 2: request a firewall exception","text":"(This step is not implemented yet and can be skipped)
You can request a firewall exception rule to be added for your Stratum 1 server by opening an issue on the GitHub page of the filesystem layer repository.
Make sure to include the IP address of your server.
"},{"location":"filesystem_layer/stratum1/#step-3-verification-of-the-stratum-1","title":"Step 3: Verification of the Stratum 1","text":"When the playbook has finished your Stratum 1 should be ready. In order to test your Stratum 1, even without a client installed, you can use curl
.
curl --head http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io/.cvmfspublished\n
This should return: HTTP/1.1 200 OK\n...\nX-Cache: MISS from <url-or-ip-to-your-stratum1>\n
The second time you run it, you should get a cache hit:
X-Cache: HIT from <url-or-ip-to-your-stratum1>\n
Example with the Norwegian Stratum 1:
curl --head http://bgo-no.stratum1.cvmfs.eessi-infra.org/cvmfs/software.eessi.io/.cvmfspublished\n
You can also test access to your Stratum 1 from a client, for which you will have to install the CVMFS client.
Then run the following command to add your newly created Stratum 1 to the existing list of EESSI Stratum 1 servers by creating a local CVMFS configuration file:
echo 'CVMFS_SERVER_URL=\"http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@;$CVMFS_SERVER_URL\"' | sudo tee -a /etc/cvmfs/domain.d/eessi-hpc.org.local\n
If this is the first time you set up the client you now run:
sudo cvmfs_config setup\n
If you already had configured the client before, you can simply reload the config:
sudo cvmfs_config reload -c software.eessi.io\n
Finally, verify that the client connects to your new Stratum 1 by running:
cvmfs_config stat -v software.eessi.io\n
Assuming that your new Stratum 1 is the geographically closest one to your client, this should return:
Connection: http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io through proxy DIRECT (online)\n
"},{"location":"filesystem_layer/stratum1/#step-4-request-an-eessi-dns-name","title":"Step 4: request an EESSI DNS name","text":"In order to keep the configuration clean and easy, all the EESSI Stratum 1 servers have a DNS name <your site>.stratum1.cvmfs.eessi-infra.org
, where <your site>
is often a short name or abbreviation followed by the country code (e.g. rug-nl
or bgo-no
). You can request this for your Stratum 1 by mentioning this in the issue that you created in Step 2, or by opening another issue.
"},{"location":"filesystem_layer/stratum1/#step-5-include-your-stratum-1-in-the-eessi-configuration","title":"Step 5: include your Stratum 1 in the EESSI configuration","text":"If you want to include your Stratum 1 in the EESSI configuration, i.e. allow any (nearby) client to be able to use it, you can open a pull request with updated configuration files. You will only have to add the URL to your Stratum 1 to the urls
list of the eessi_cvmfs_server_urls
variable in the all.yml
file.
"},{"location":"getting_access/eessi_container/","title":"EESSI container script","text":"The eessi_container.sh
script provides a very easy yet versatile means to access EESSI. It is the preferred method to start an EESSI container as it has support for many different scenarios via various options.
This page guides you through several example scenarios illustrating the use of the script.
"},{"location":"getting_access/eessi_container/#prerequisites","title":"Prerequisites","text":" - Apptainer 1.0.0 (or newer), or Singularity 3.7.x
- Check with
apptainer --version
or singularity --version
- Support for the
--fusemount
option in the shell
and run
subcommands is required
- Git
- Check with
git --version
"},{"location":"getting_access/eessi_container/#preparation","title":"Preparation","text":"Clone the EESSI/software-layer
repository and change into the software-layer
directory by running these commands:
git clone https://github.com/EESSI/software-layer.git\ncd software-layer\n
"},{"location":"getting_access/eessi_container/#quickstart","title":"Quickstart","text":"Run the eessi_container
script (from the software-layer
directory) to start a shell session in the EESSI container:
./eessi_container.sh\n
Note
Startup will take a bit longer the first time you run this because the container image is downloaded and converted.
You should see output like
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nApptainer> CernVM-FS: loading Fuse module... done\nCernVM-FS: loading Fuse module... done\n\nApptainer>\n
Note
You may have to press enter to clearly see the prompt as some messages beginning with CernVM-FS:
have been printed after the first prompt Apptainer>
was shown.
To start using EESSI, see Using EESSI/Setting up your environment.
"},{"location":"getting_access/eessi_container/#help-for-eessi_containersh","title":"Help for eessi_container.sh
","text":"The example in the Quickstart section facilitates an interactive session with read access to the EESSI software stack. It does not require any command line options, because the script eessi_container.sh
uses some carefully chosen defaults. To view all options of the script and its default values, run the command
./eessi_container.sh --help\n
You should see the following output usage: ./eessi_container.sh [OPTIONS] [[--] SCRIPT or COMMAND]\n OPTIONS:\n -a | --access {ro,rw} - ro (read-only), rw (read & write) [default: ro]\n -c | --container IMG - image file or URL defining the container to use\n [default: docker://ghcr.io/eessi/build-node:debian11]\n -g | --storage DIR - directory space on host machine (used for\n temporary data) [default: 1. TMPDIR, 2. /tmp]\n -h | --help - display this usage information [default: false]\n -i | --host-injections - directory to link to for host_injections \n [default: /..storage../opt-eessi]\n -l | --list-repos - list available repository identifiers [default: false]\n -m | --mode MODE - with MODE==shell (launch interactive shell) or\n MODE==run (run a script or command) [default: shell]\n -n | --nvidia MODE - configure the container to work with NVIDIA GPUs,\n MODE==install for a CUDA installation, MODE==run to\n attach a GPU, MODE==all for both [default: false]\n -r | --repository CFG - configuration file or identifier defining the\n repository to use [default: EESSI via\n container configuration]\n -u | --resume DIR/TGZ - resume a previous run from a directory or tarball,\n where DIR points to a previously used tmp directory\n (check for output 'Using DIR as tmp ...' of a previous\n run) and TGZ is the path to a tarball which is\n unpacked the tmp dir stored on the local storage space\n (see option --storage above) [default: not set]\n -s | --save DIR/TGZ - save contents of tmp directory to a tarball in\n directory DIR or provided with the fixed full path TGZ\n when a directory is provided, the format of the\n tarball's name will be {REPO_ID}-{TIMESTAMP}.tgz\n [default: not set]\n -v | --verbose - display more information [default: false]\n -x | --http-proxy URL - provides URL for the env variable http_proxy\n [default: not set]; uses env var $http_proxy if set\n -y | --https-proxy URL - provides URL for the env variable https_proxy\n [default: not set]; uses env var $https_proxy if set\n\n If value for --mode is 'run', the SCRIPT/COMMAND provided is executed. If\n arguments to the script/command start with '-' or '--', use the flag terminator\n '--' to let eessi_container.sh stop parsing arguments.\n
So, the defaults are equal to running the command
./eessi_container.sh --access ro --container docker://ghcr.io/eessi/build-node:debian11 --mode shell --repository EESSI\n
and it would either create a temporary directory under ${TMPDIR}
(if defined), or /tmp
(if ${TMPDIR}
is not defined). The remainder of this page will demonstrate different scenarios using some of the command line options used for read-only access.
Other options supported by the script will be discussed in a yet-to-be written section covering building software to be added to the EESSI stack.
"},{"location":"getting_access/eessi_container/#resuming-a-previous-session","title":"Resuming a previous session","text":"You may have noted the following line in the output of eessi_container.sh
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\n
Note
The parameter after --resume
(/tmp/eessi.abc123defg
) will be different when you run eessi_container.sh
.
Scroll back in your terminal and copy it so you can pass it to --resume
.
Try the following command to \"resume\" from the last session.
./eessi_container.sh --resume /tmp/eessi.abc123defg\n
This should run much faster because the container image has been cached in the temporary directory (/tmp/eessi.abc123defg
). You should get to the prompt (Apptainer>
or Singularity>
) and can use EESSI with the state where you left the previous session. Note
The state refers to what was stored on disk, not what was changed in memory. Particularly, any environment (variable) settings are not restored automatically.
Because the /tmp/eessi.abc123defg
directory contains a home
directory which includes the saved history of your last session, you can easily restore the environment (variable) settings. Type history
to see which commands you ran. You should be able to access the history as you would do in a normal terminal session.
"},{"location":"getting_access/eessi_container/#running-a-simple-command","title":"Running a simple command","text":"Let's \"ls /cvmfs/software.eessi.io
\" through the eessi_container.sh
script to check if the CernVM-FS EESSI repository is accessible:
./eessi_container.sh --mode run ls /cvmfs/software.eessi.io\n
You should see an output such as
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nhost_injections latest versions\n
Note that this time no interactive shell session is started in the container: only the provided command is run in the container, and when that finishes you are back in the shell session where you ran the eessi_container.sh
script.
This is because we used the --mode run
command line option.
Note
The last line in the output is the output of the ls
command, which shows the contents of the /cvmfs/software.eessi.io
directory.
Also, note that there is no shell prompt (Apptainer>
or Singularity>
), since no interactive shell session is started in the container.
As an alternative to specifying the command as we did above, you can also do the following.
CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh --mode shell <<< ${CMD}\n
Note
We changed the mode from run
to shell
because we use a different method to let the script run our command, by feeding it in via the stdin
input channel using <<<
.
Because shell
is the default value for --mode
we can also omit this and simply run
CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
"},{"location":"getting_access/eessi_container/#running-a-script","title":"Running a script","text":"While running simple command can be sufficient in some cases, you often want to run scripts containing multiple commands.
Let's run the script shown below.
First, copy-paste the contents for the script shown below, and create a file named eessi_architectures.sh
in your current directory. Also make the script executable, by running:
chmod +x eessi_architectures.sh\n
Here are the contents for the eessi_architectures.sh
script:
#!/usr/bin/env bash\n#\n# This script determines which architectures are included in the\n# latest EESSI version. It makes use of the specific directory\n# structure in the EESSI repository.\n#\n\n# determine list of available OS types\nBASE=${EESSI_CVMFS_REPO:-/cvmfs/software.eessi.io}/latest/software\ncd ${BASE}\nfor os_type in $(ls -d *)\ndo\n # determine architecture families\n OS_BASE=${BASE}/${os_type}\n cd ${OS_BASE}\n for arch_family in $(ls -d *)\n do\n # determine CPU microarchitectures\n OS_ARCH_BASE=${BASE}/${os_type}/${arch_family}\n cd ${OS_ARCH_BASE}\n for microarch in $(ls -d *)\n do\n case ${microarch} in\n amd | intel )\n for sub in $(ls ${microarch})\n do\n echo \"${os_type}/${arch_family}/${microarch}/${sub}\"\n done\n ;;\n * )\n echo \"${os_type}/${arch_family}/${microarch}\"\n ;;\n esac\n done\n done\ndone\n
Run the script as follows ./eessi_container.sh --mode shell < eessi_architectures.sh\n
The output should be similar to Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nlinux/aarch64/generic\nlinux/aarch64/graviton2\nlinux/aarch64/graviton3\nlinux/ppc64le/generic\nlinux/ppc64le/power9le\nlinux/x86_64/amd/zen2\nlinux/x86_64/amd/zen3\nlinux/x86_64/generic\nlinux/x86_64/intel/haswell\nlinux/x86_64/intel/skylake_avx512\n
Lines 6 to 15 show the output of the script eessi_architectures.sh
. If you want to use the mode run
, you have to make the script's location available inside the container.
This can be done by mapping the current directory (${PWD}
), which contains eessi_architectures.sh
, to any not-yet existing directory inside the container using the $SINGULARITY_BIND
or $APPTAINER_BIND
environment variable.
For example:
SINGULARITY_BIND=${PWD}:/scripts ./eessi_container.sh --mode run /scripts/eessi_architectures.sh\n
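Or, if your system provides Apptainer rather than Singularity, the same can be done with the corresponding environment variable: APPTAINER_BIND=${PWD}:/scripts ./eessi_container.sh --mode run /scripts/eessi_architectures.sh\n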
"},{"location":"getting_access/eessi_container/#running-scripts-or-commands-with-parameters-starting-with-or-","title":"Running scripts or commands with parameters starting with -
or --
","text":"Let's assume we would like to get more information about the entries of /cvmfs/software.eessi.io
. If we simply run
./eessi_container.sh --mode run ls -lH /cvmfs/software.eessi.io\n
we would get an error message such as ERROR: Unknown option: -lH\n
We can resolve this in two ways: - Using the
stdin
channel as described above, for example, by simply running CMD=\"ls -lH /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
which should result in the output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user 10 Jun 30 2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user 16 May 4 2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10 2021 versions\n
- Using the flag terminator
--
which tells eessi_container.sh
to stop parsing command line arguments. For example, ./eessi_container.sh --mode run -- ls -lH /cvmfs/software.eessi.io\n
which should result in the output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q run --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif ls -lH /cvmfs/software.eessi.io\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user 10 Jun 30 2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user 16 May 4 2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10 2021 versions\n
"},{"location":"getting_access/eessi_container/#running-eessi-demos","title":"Running EESSI demos","text":"For examples of scripts that use the software provided by EESSI, see Running EESSI demos.
"},{"location":"getting_access/eessi_container/#launching-containers-more-quickly","title":"Launching containers more quickly","text":"Subsequent runs of eessi_container.sh
may reuse temporary data of a previous session, which includes the pulled image of the container. However, reusing a previous session is not always what we want, even though we would still like to launch the container more quickly.
The eessi_container.sh
script may (re)-use a cache directory provided via $SINGULARITY_CACHEDIR
(or $APPTAINER_CACHEDIR
when using Apptainer). Hence, the container image does not have to be downloaded again even when starting a new session. The example below illustrates this.
export SINGULARITY_CACHEDIR=${PWD}/container_cache_dir\ntime ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
which should produce output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections latest versions\n\nreal m40.445s\nuser 3m2.621s\nsys 0m7.402s\n
The next run using the same cache directory, e.g., by simply executing time ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
is much faster Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections latest versions\n\nreal 0m2.781s\nuser 0m0.172s\nsys 0m0.436s\n
Note
Each run of eessi_container.sh
(without specifying --resume
) creates a new temporary directory. The temporary directory stores, among other data, the image file of the container. Thus we can ensure that the container is available locally for a subsequent run.
However, this may quickly consume scarce resources, for example, a small partition where /tmp
is located (default for temporary storage, see --help
for specifying a different location).
See the next section for how to clean up temporary data that is no longer needed.
"},{"location":"getting_access/eessi_container/#reducing-disk-usage","title":"Reducing disk usage","text":"By default eessi_container.sh
creates a temporary directory under /tmp
. The directories are named eessi.RANDOM
where RANDOM
is a 10-character string. The script does not automatically remove these directories. To determine their total disk usage, simply run
du -sch /tmp/eessi.*\n
which could result in output similar to 333M /tmp/eessi.session123\n333M /tmp/eessi.session456\n333M /tmp/eessi.session789\n997M total\n
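For example, to remove the temporary directory of one particular session that you no longer need (and do not plan to resume from), using one of the hypothetical names from the output above: rm -rf /tmp/eessi.session123\n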
Clean up disk usage by simply removing directories you do not need any longer."},{"location":"getting_access/eessi_container/#eessi-container-image","title":"EESSI container image","text":"If you would like to directly use an EESSI container image, you can do so by configuring apptainer
to correctly mount the CVMFS repository:
# honor $TMPDIR if it is already defined, use /tmp otherwise\nif [ -z $TMPDIR ]; then\n export WORKDIR=/tmp/$USER\nelse\n export WORKDIR=$TMPDIR/$USER\nfi\n\nmkdir -p ${WORKDIR}/{var-lib-cvmfs,var-run-cvmfs,home}\nexport SINGULARITY_BIND=\"${WORKDIR}/var-run-cvmfs:/var/run/cvmfs,${WORKDIR}/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"${WORKDIR}/home:/home/$USER\"\nexport EESSI_REPO=\"container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io\"\nexport EESSI_CONTAINER=\"docker://ghcr.io/eessi/client:centos7\"\nsingularity shell --fusemount \"$EESSI_REPO\" \"$EESSI_CONTAINER\"\n
"},{"location":"getting_access/is_eessi_accessible/","title":"Is EESSI accessible?","text":"EESSI can be accessed via a native (CernVM-FS) installation, or via a container that includes CernVM-FS.
Before you look into these options, check if EESSI is already accessible on your system.
Run the following command:
ls /cvmfs/software.eessi.io\n
Note
This ls
command may take a couple of seconds to finish, since CernVM-FS may need to download or update the metadata for that directory.
If you see output like shown below, you already have access to EESSI on your system.
host_injections latest versions\n
For starting to use EESSI, continue reading about Setting up environment.
If you see an error message as shown below, EESSI is not yet accessible on your system.
ls: /cvmfs/software.eessi.io: No such file or directory\n
No worries, you don't need to be a to get access to EESSI. Continue reading about the Native installation of EESSI, or access via the EESSI container.
"},{"location":"getting_access/native_installation/","title":"Native installation","text":"Setting up native access to EESSI, that is a system-wide deployment that does not require workarounds like using a container, requires the installation and configuration of CernVM-FS.
This requires admin privileges, since you need to install CernVM-FS as an OS package.
The following actions must be taken for a (basic) native installation of EESSI:
- Installing CernVM-FS itself, ideally using the OS packages provided by the CernVM-FS project (although installing from source is also possible);
- Installing the EESSI configuration for CernVM-FS, which can be done by installing the
cvmfs-config-eessi
package that we provide for the most popular Linux distributions (more information available here); - Creating a small client configuration file for CernVM-FS (
/etc/cvmfs/default.local
); see also the CernVM-FS documentation.
The good news is that all of this only requires a handful commands :
RHEL-based Linux distributionsDebian-based Linux distributions # Installation commands for RHEL-based distros like CentOS, Rocky Linux, Almalinux, Fedora, ...\n\n# install CernVM-FS\nsudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\nsudo yum install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nsudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
Note
The commands above only cover the basic installation of EESSI.
This is good enough for an individual client, or for testing purposes, but for a production-quality setup you should also set up a Squid proxy cache.
For large-scale systems, like an HPC cluster, you should also consider setting up your own CernVM-FS Stratum-1 mirror server.
For more details on this, please refer to the Stratum 1 and proxies section of the CernVM-FS tutorial.
"},{"location":"known_issues/eessi-2023.06/","title":"Known issues","text":""},{"location":"known_issues/eessi-2023.06/#eessi-production-repository-v202306","title":"EESSI Production Repository (v2023.06)","text":""},{"location":"known_issues/eessi-2023.06/#failed-to-modify-ud-qp-to-init-on-mlx5_0-operation-not-permitted","title":"Failed to modify UD QP to INIT on mlx5_0: Operation not permitted
","text":"This is an error that occurs with OpenMPI after updating to OFED 23.10.
Their is an upstream issue on this problem opened with EasyBuild. See: https://github.com/easybuilders/easybuild-easyconfigs/issues/20233
Workarounds You can instruct OpenMPI to not use libfabric and turn off `uct`(see https://openucx.readthedocs.io/en/master/running.html#running-mpi) by passing the following options to `mpirun`:
mpirun -mca pml ucx -mca btl '^uct,ofi' -mca mtl '^ofi'\n
Or equivalently, you can set the following environment variables: export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
"},{"location":"meetings/2022-09-amsterdam/","title":"EESSI Community Meeting (Sept'22, Amsterdam)","text":""},{"location":"meetings/2022-09-amsterdam/#practical-info","title":"Practical info","text":" - dates: Wed-Fri 14-16 Sept'22
- in conjunction with CernVM workshop @ Nikhef (Mon-Tue 12-13 Sept'22)
- venue: \"Polderzaal\" at Cafe-Restaurant Polder (Google Maps), sponsored by SURF
- registration (closed since Fri 9 Sept'22)
- Slack channel:
community-meeting-2022
in EESSI Slack - YouTube playlist with recorded talks
"},{"location":"meetings/2022-09-amsterdam/#agenda","title":"Agenda","text":"(subject to changes)
We envision a mix of presentations, experience reports, demos, and hands-on sessions and/or hackathons related to the EESSI project.
If you would like to give a talk or host a session, please let us know via the EESSI Slack!
"},{"location":"meetings/2022-09-amsterdam/#wed-14-sept-2022","title":"Wed 14 Sept 2022","text":" - [10:00-13:00] Welcome session
- [10:00-10:30] Walk-in, coffee
- [10:30-12:00] Round table discussion (not live-streamed!)
- [12:00-13:00] Lunch
- [13:00-15:00] Presentations on EESSI
- [13:00-13:30] Introduction to EESSI (Caspar) [slides - recording]
- [13:30-14:00] Hands-on: how to use EESSI (Kenneth) [slides - recording]
- [14:00-14:30] EESSI use cases (Kenneth) [(slides - recording]
- [14:30-15:00] EESSI for sysadmins (Thomas) [slides - recording]
- [15:00-15:30] Coffee break
- [15:30-17:00] Presentations on EESSI (continued)
- [15:30-16:00] Hands-on: installing EESSI (Thomas/Kenneth)
- [16:00-16:45] ComputeCanada site talk (Bart Oldeman, remote) [slides - recording]
- [16:45-17:15] Magic Castle (Felix-Antoine Fortin, remote) [slides - recording]
- [19:00-...] Group dinner @ Saravanaa Bhavan (sponsored by Dell Technologies)
- address: Stadhouderskade 123-124, Amsterdam
"},{"location":"meetings/2022-09-amsterdam/#thu-15-sept-2022","title":"Thu 15 Sept 2022","text":" - [09:30-12:00] More focused presentations on aspects of EESSI
- [09:30-10:00] EESSI behind the scenes: compat layer (Bob) [slides - recording]
- [10:00-10:30] EESSI behind the scenes: software layer (Kenneth) [slides - recording]
- [10:30-11:00] Coffee break
- [11:00-11:30] EESSI behind the scenes: infrastructure (Terje) [slides - recording]
- [11:30-12:00] Status on RISC-V support (Kenneth) [slides - recording]
- [12:00-13:00] Lunch
- [13:00-14:00] Discussions/hands-on sessions/hackathon
- [14:00-14:30] Status on GPU support (Alan) [slides - recording]
- [14:30-15:00] Status on build-and-deploy bot (Thomas) [slides - recording]
- [15:00-15:30] Coffee break
- [15:30-17:00] Discussions/hands-on sessions/hackathon (continued)
- Hands-on with GPUs (Alan)
- Hands-on with bot (Thomas/Kenneth)
- [19:00-...] Group dinner @ Italia Oggi (sponsored by HPC-UGent)
- address: Binnen Bantammerstraat 11, Amsterdam
"},{"location":"meetings/2022-09-amsterdam/#fri-16-sept-2022","title":"Fri 16 Sept 2022","text":" - [09:30-12:00] Presentations on future work
- [09:30-10:00] Testing in software layer (Caspar) [slides - recording]
- [10:00-10:30] MultiXscale project (Alan) [slides - recording]
- [10:30-11:00] Coffee break
- [11:00-11:30] Short-term future work (Kenneth) [slides - recording]
- [11:30-12:00] Discussion: future management structure of EESSI (Alan) [slides - recording]
- [12:00-13:00] Lunch
- [13:00-14:00] Site reports [recording]
- NESSI (Thomas) [slides]
- NLPL (Stephan) [slides]
- HPCNow! (Danilo) [slides]
- Azure (Hugo) [slides]
- [14:00-14:30] Discussion: what would make or break EESSI for your site? (notes - recording)
- [14:30-15:45] Discussions/hands-on sessions/hackathon
- Hands-on with GPU support (Alan)
- Hands-on with bot (Thomas/Kenneth)
- Hands-on with software testing (Caspar)
- We need to leave the room by 16:00!
"},{"location":"repositories/pilot/","title":"Pilot","text":""},{"location":"repositories/pilot/#pilot-software-stack-202112","title":"Pilot software stack (2021.12)","text":""},{"location":"repositories/pilot/#caveats","title":"Caveats","text":"Danger
The EESSI pilot repository is no longer actively maintained, and should not be used for production work.
Please use the software.eessi.io
repository instead.
The current EESSI pilot software stack (version 2021.12) is the 7th iteration, and there are some known issues and limitations, please take these into account:
- First of all: the EESSI pilot software stack is NOT READY FOR PRODUCTION!
Do not use it for production work, and be careful when testing it on production systems!
"},{"location":"repositories/pilot/#reporting-problems","title":"Reporting problems","text":"If you notice any problems, please report them via https://github.com/EESSI/software-layer/issues.
"},{"location":"repositories/pilot/#accessing-the-eessi-pilot-repository-through-singularity","title":"Accessing the EESSI pilot repository through Singularity","text":"The easiest way to access the EESSI pilot repository is by using Singularity. If Singularity is installed already, no admin privileges are required. No other software is needed either on the host.
A container image is available in the GitHub Container Registry (see https://github.com/EESSI/filesystem-layer/pkgs/container/client-pilot). It only contains a minimal operating system + the necessary packages to access the EESSI pilot repository through CernVM-FS, and it is suitable for aarch64
, ppc64le
, and x86_64
.
The container image can be used directly by Singularity (no prior download required), as follows:
-
First, create some local directories in /tmp/$USER
which will be bind mounted in the container:
mkdir -p /tmp/$USER/{var-lib-cvmfs,var-run-cvmfs,home}\n
These provides space for the CernVM-FS cache, and an empty home directory to use in the container. -
Set the $SINGULARITY_BIND
and $SINGULARITY_HOME
environment variables to configure Singularity:
export SINGULARITY_BIND=\"/tmp/$USER/var-run-cvmfs:/var/run/cvmfs,/tmp/$USER/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"/tmp/$USER/home:/home/$USER\"\n
-
Start the container using singularity shell
, using --fusemount
to mount the EESSI pilot repository (using the cvmfs2
command that is included in the container image):
export EESSI_PILOT=\"container:cvmfs2 pilot.eessi-hpc.org /cvmfs/pilot.eessi-hpc.org\"\nsingularity shell --fusemount \"$EESSI_PILOT\" docker://ghcr.io/eessi/client-pilot:centos7\n
-
This should give you a shell in the container, where the EESSI pilot repository is mounted:
$ singularity shell --fusemount \"$EESSI_PILOT\" docker://ghcr.io/eessi/client-pilot:centos7\nINFO: Using cached SIF image\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nSingularity>\n
- It is possible that you see some scary looking warnings, but those can be ignored for now.
To verify that things are working, check the contents of the /cvmfs/pilot.eessi-hpc.org/versions/2021.12
directory:
Singularity> ls /cvmfs/pilot.eessi-hpc.org/versions/2021.12\ncompat init software\n
"},{"location":"repositories/pilot/#standard-installation","title":"Standard installation","text":"For those with privileges on their system, there are a number of example installation scripts for different architectures and operating systems available in the EESSI demo repository.
Here we prefer the Singularity approach as we can guarantee that the container image is up to date.
"},{"location":"repositories/pilot/#setting-up-the-eessi-environment","title":"Setting up the EESSI environment","text":"Once you have the EESSI pilot repository mounted, you can set up the environment by sourcing the provided init script:
source /cvmfs/pilot.eessi-hpc.org/versions/2021.12/init/bash\n
If all goes well, you should see output like this:
Found EESSI pilot repo @ /cvmfs/pilot.eessi-hpc.org/versions/2021.12!\nUsing x86_64/intel/haswell as software subdirectory.\nUsing /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI pilot software stack, have fun!\n[EESSI pilot 2021.12] $ \n
Now you're all set up! Go ahead and explore the software stack using \"module avail
\", and go wild with testing the available software installations!
"},{"location":"repositories/pilot/#testing-the-eessi-pilot-software-stack","title":"Testing the EESSI pilot software stack","text":"Please test the EESSI pilot software stack as you see fit: running simple commands, performing small calculations or running small benchmarks, etc.
Test scripts that have been verified to work correctly using the pilot software stack are available at https://github.com/EESSI/software-layer/tree/main/tests .
"},{"location":"repositories/pilot/#giving-feedback-or-reporting-problems","title":"Giving feedback or reporting problems","text":"Any feedback is welcome, and questions or problems reports are welcome as well, through one of the EESSI communication channels:
- (preferred!) EESSI
software-layer
GitHub repository: https://github.com/EESSI/software-layer/issues - EESSI mailing list (
eessi@list.rug.nl
) - EESSI Slack: https://eessi-hpc.slack.com (get an invite via https://www.eessi-hpc.org/join)
- monthly EESSI meetings (first Thursday of the month at 2pm CEST)
"},{"location":"repositories/pilot/#available-software","title":"Available software","text":"(last update: Mar 21st 2022)
EESSI currently supports the following HPC applications as well as all their dependencies:
- GROMACS (2020.1 and 2020.4)
- OpenFOAM (v2006 and 8)
- R (4.0.0) + R-bundle-Bioconductor (3.11) + RStudio Server (1.3.1093)
- TensorFlow (2.3.1) and Horovod (0.21.3)
- OSU-Micro-Benchmarks (5.6.3)
- ReFrame (3.9.1)
- Spark (3.1.1)
- IPython (7.15.0)
- QuantumESPRESSO (6.6) (currently not available on
ppc64le
) - WRF (3.9.1.1)
[EESSI pilot 2021.12] $ module --nx avail\n\n--------------------------- /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all ----------------------------\n ant/1.10.8-Java-11 LMDB/0.9.24-GCCcore-9.3.0\n Arrow/0.17.1-foss-2020a-Python-3.8.2 lz4/1.9.2-GCCcore-9.3.0\n Bazel/3.6.0-GCCcore-9.3.0 Mako/1.1.2-GCCcore-9.3.0\n Bison/3.5.3-GCCcore-9.3.0 MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n Boost/1.72.0-gompi-2020a matplotlib/3.2.1-foss-2020a-Python-3.8.2\n cairo/1.16.0-GCCcore-9.3.0 Mesa/20.0.2-GCCcore-9.3.0\n CGAL/4.14.3-gompi-2020a-Python-3.8.2 Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2\n CMake/3.16.4-GCCcore-9.3.0 METIS/5.1.0-GCCcore-9.3.0\n CMake/3.20.1-GCCcore-10.3.0 MPFR/4.0.2-GCCcore-9.3.0\n code-server/3.7.3 NASM/2.14.02-GCCcore-9.3.0\n DB/18.1.32-GCCcore-9.3.0 ncdf4/1.17-foss-2020a-R-4.0.0\n DB/18.1.40-GCCcore-10.3.0 netCDF-Fortran/4.5.2-gompi-2020a\n double-conversion/3.1.5-GCCcore-9.3.0 netCDF/4.7.4-gompi-2020a\n Doxygen/1.8.17-GCCcore-9.3.0 nettle/3.6-GCCcore-9.3.0\n EasyBuild/4.5.0 networkx/2.4-foss-2020a-Python-3.8.2\n EasyBuild/4.5.1 (D) Ninja/1.10.0-GCCcore-9.3.0\n Eigen/3.3.7-GCCcore-9.3.0 NLopt/2.6.1-GCCcore-9.3.0\n Eigen/3.3.9-GCCcore-10.3.0 NSPR/4.25-GCCcore-9.3.0\n ELPA/2019.11.001-foss-2020a NSS/3.51-GCCcore-9.3.0\n expat/2.2.9-GCCcore-9.3.0 nsync/1.24.0-GCCcore-9.3.0\n expat/2.2.9-GCCcore-10.3.0 numactl/2.0.13-GCCcore-9.3.0\n FFmpeg/4.2.2-GCCcore-9.3.0 numactl/2.0.14-GCCcore-10.3.0\n FFTW/3.3.8-gompi-2020a OpenBLAS/0.3.9-GCC-9.3.0\n FFTW/3.3.9-gompi-2021a OpenBLAS/0.3.15-GCC-10.3.0\n flatbuffers/1.12.0-GCCcore-9.3.0 OpenFOAM/v2006-foss-2020a\n FlexiBLAS/3.0.4-GCC-10.3.0 OpenFOAM/8-foss-2020a (D)\n fontconfig/2.13.92-GCCcore-9.3.0 OpenMPI/4.0.3-GCC-9.3.0\n foss/2020a OpenMPI/4.1.1-GCC-10.3.0\n foss/2021a OpenPGM/5.2.122-GCCcore-9.3.0\n freetype/2.10.1-GCCcore-9.3.0 OpenSSL/1.1 (D)\n FriBidi/1.0.9-GCCcore-9.3.0 OSU-Micro-Benchmarks/5.6.3-gompi-2020a\n GCC/9.3.0 Pango/1.44.7-GCCcore-9.3.0\n GCC/10.3.0 ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi\n GCCcore/9.3.0 PCRE/8.44-GCCcore-9.3.0\n GCCcore/10.3.0 PCRE2/10.34-GCCcore-9.3.0\n Ghostscript/9.52-GCCcore-9.3.0 Perl/5.30.2-GCCcore-9.3.0\n giflib/5.2.1-GCCcore-9.3.0 Perl/5.32.1-GCCcore-10.3.0\n git/2.23.0-GCCcore-9.3.0-nodocs pixman/0.38.4-GCCcore-9.3.0\n git/2.32.0-GCCcore-10.3.0-nodocs (D) pkg-config/0.29.2-GCCcore-9.3.0\n GLib/2.64.1-GCCcore-9.3.0 pkg-config/0.29.2-GCCcore-10.3.0\n GLPK/4.65-GCCcore-9.3.0 pkg-config/0.29.2 (D)\n GMP/6.2.0-GCCcore-9.3.0 pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2\n GMP/6.2.1-GCCcore-10.3.0 PMIx/3.1.5-GCCcore-9.3.0\n gnuplot/5.2.8-GCCcore-9.3.0 PMIx/3.2.3-GCCcore-10.3.0\n GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2\n gompi/2020a protobuf-python/3.13.0-foss-2020a-Python-3.8.2\n gompi/2021a protobuf/3.13.0-GCCcore-9.3.0\n groff/1.22.4-GCCcore-9.3.0 pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2\n groff/1.22.4-GCCcore-10.3.0 pybind11/2.6.2-GCCcore-10.3.0\n GROMACS/2020.1-foss-2020a-Python-3.8.2 Python/2.7.18-GCCcore-9.3.0\n GROMACS/2020.4-foss-2020a-Python-3.8.2 (D) Python/3.8.2-GCCcore-9.3.0\n GSL/2.6-GCC-9.3.0 Python/3.9.5-GCCcore-10.3.0-bare\n gzip/1.10-GCCcore-9.3.0 Python/3.9.5-GCCcore-10.3.0\n h5py/2.10.0-foss-2020a-Python-3.8.2 PyYAML/5.3-GCCcore-9.3.0\n HarfBuzz/2.6.4-GCCcore-9.3.0 Qt5/5.14.1-GCCcore-9.3.0\n HDF5/1.10.6-gompi-2020a QuantumESPRESSO/6.6-foss-2020a\n Horovod/0.21.3-foss-2020a-TensorFlow-2.3.1-Python-3.8.2 R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n hwloc/2.2.0-GCCcore-9.3.0 
R/4.0.0-foss-2020a\n hwloc/2.4.1-GCCcore-10.3.0 re2c/1.3-GCCcore-9.3.0\n hypothesis/6.13.1-GCCcore-10.3.0 RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n ICU/66.1-GCCcore-9.3.0 Rust/1.52.1-GCCcore-10.3.0\n ImageMagick/7.0.10-1-GCCcore-9.3.0 ScaLAPACK/2.1.0-gompi-2020a\n IPython/7.15.0-foss-2020a-Python-3.8.2 ScaLAPACK/2.1.0-gompi-2021a-fb\n JasPer/2.0.14-GCCcore-9.3.0 scikit-build/0.10.0-foss-2020a-Python-3.8.2\n Java/11.0.2 (11) SciPy-bundle/2020.03-foss-2020a-Python-3.8.2\n jbigkit/2.1-GCCcore-9.3.0 SciPy-bundle/2021.05-foss-2021a\n JsonCpp/1.9.4-GCCcore-9.3.0 SCOTCH/6.0.9-gompi-2020a\n LAME/3.100-GCCcore-9.3.0 snappy/1.1.8-GCCcore-9.3.0\n libarchive/3.5.1-GCCcore-10.3.0 Spark/3.1.1-foss-2020a-Python-3.8.2\n libcerf/1.13-GCCcore-9.3.0 SQLite/3.31.1-GCCcore-9.3.0\n libdrm/2.4.100-GCCcore-9.3.0 SQLite/3.35.4-GCCcore-10.3.0\n libevent/2.1.11-GCCcore-9.3.0 SWIG/4.0.1-GCCcore-9.3.0\n libevent/2.1.12-GCCcore-10.3.0 Szip/2.1.1-GCCcore-9.3.0\n libfabric/1.11.0-GCCcore-9.3.0 Tcl/8.6.10-GCCcore-9.3.0\n libfabric/1.12.1-GCCcore-10.3.0 Tcl/8.6.11-GCCcore-10.3.0\n libffi/3.3-GCCcore-9.3.0 tcsh/6.22.02-GCCcore-9.3.0\n libffi/3.3-GCCcore-10.3.0 TensorFlow/2.3.1-foss-2020a-Python-3.8.2\n libgd/2.3.0-GCCcore-9.3.0 time/1.9-GCCcore-9.3.0\n libGLU/9.0.1-GCCcore-9.3.0 Tk/8.6.10-GCCcore-9.3.0\n libglvnd/1.2.0-GCCcore-9.3.0 Tkinter/3.8.2-GCCcore-9.3.0\n libiconv/1.16-GCCcore-9.3.0 UCX/1.8.0-GCCcore-9.3.0\n libjpeg-turbo/2.0.4-GCCcore-9.3.0 UCX/1.10.0-GCCcore-10.3.0\n libpciaccess/0.16-GCCcore-9.3.0 UDUNITS/2.2.26-foss-2020a\n libpciaccess/0.16-GCCcore-10.3.0 UnZip/6.0-GCCcore-9.3.0\n libpng/1.6.37-GCCcore-9.3.0 UnZip/6.0-GCCcore-10.3.0\n libsndfile/1.0.28-GCCcore-9.3.0 WRF/3.9.1.1-foss-2020a-dmpar\n libsodium/1.0.18-GCCcore-9.3.0 X11/20200222-GCCcore-9.3.0\n LibTIFF/4.1.0-GCCcore-9.3.0 x264/20191217-GCCcore-9.3.0\n libtirpc/1.2.6-GCCcore-9.3.0 x265/3.3-GCCcore-9.3.0\n libunwind/1.3.1-GCCcore-9.3.0 xorg-macros/1.19.2-GCCcore-9.3.0\n libxc/4.3.4-GCC-9.3.0 xorg-macros/1.19.3-GCCcore-10.3.0\n libxml2/2.9.10-GCCcore-9.3.0 Xvfb/1.20.9-GCCcore-9.3.0\n libxml2/2.9.10-GCCcore-10.3.0 Yasm/1.3.0-GCCcore-9.3.0\n libyaml/0.2.2-GCCcore-9.3.0 ZeroMQ/4.3.2-GCCcore-9.3.0\n LittleCMS/2.9-GCCcore-9.3.0 Zip/3.0-GCCcore-9.3.0\n LLVM/9.0.1-GCCcore-9.3.0 zstd/1.4.4-GCCcore-9.3.0\n
"},{"location":"repositories/pilot/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":""},{"location":"repositories/pilot/#x86_64","title":"x86_64","text":" - generic (currently implies
march=x86-64
and -mtune=generic
) - AMD
- zen2 (Rome)
- zen3 (Milan)
- Intel
- haswell
- skylake_avx512
"},{"location":"repositories/pilot/#aarch64arm64","title":"aarch64/arm64","text":" - generic (currently implies
-march=armv8-a
and -mtune=generic
) - AWS Graviton2
"},{"location":"repositories/pilot/#ppc64le","title":"ppc64le","text":" - generic
- power9le
"},{"location":"repositories/pilot/#easybuild-configuration","title":"EasyBuild configuration","text":"EasyBuild v4.5.1 was used to install the software in the 2021.12
version of the pilot repository. For some installations pull requests with changes that will be included in later EasyBuild versions were leveraged, see the build script that was used.
An example configuration of the build environment based on https://github.com/EESSI/software-layer can be seen here:
$ eb --show-config\n#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath (E) = /tmp/eessi-build/easybuild/build\ncontainerpath (E) = /tmp/eessi-build/easybuild/containers\ndebug (E) = True\nfilter-deps (E) = Autoconf, Automake, Autotools, binutils, bzip2, cURL, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars (E) = LD_LIBRARY_PATH\nhooks (E) = /home/eessi-build/software-layer/eb_hooks.py\nignore-osdeps (E) = True\ninstallpath (E) = /cvmfs/pilot.eessi-hpc.org/2021.06/software/linux/x86_64/intel/haswell\nmodule-extensions (E) = True\npackagepath (E) = /tmp/eessi-build/easybuild/packages\nprefix (E) = /tmp/eessi-build/easybuild\nrepositorypath (E) = /tmp/eessi-build/easybuild/ebfiles_repo\nrobot-paths (D) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/software/EasyBuild/4.5.1/easybuild/easyconfigs\nrpath (E) = True\nsourcepath (E) = /tmp/eessi-build/easybuild/sources:\nsysroot (E) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/compat/linux/x86_64\ntrace (E) = True\nzip-logs (E) = bzip2\n
"},{"location":"repositories/pilot/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for the pilot repository is shown at http://status.eessi.io/pilot/.
"},{"location":"repositories/riscv.eessi.io/","title":"EESSI RISC-V development repository (riscv.eessi.io
)","text":"This repository contains development versions of an EESSI RISC-V software stack. Note that versions may be added, modified, or deleted at any time.
"},{"location":"repositories/riscv.eessi.io/#accessing-the-risc-v-repository","title":"Accessing the RISC-V repository","text":"See Getting access; by making the EESSI CVMFS domain available, you will automatically have access to riscv.eessi.io
as well.
"},{"location":"repositories/riscv.eessi.io/#using-riscveessiio","title":"Using riscv.eessi.io
","text":"This repository currently offers one version (20240402), and this contains both a compatibility layer and a software layer. Furthermore, initialization scripts are in place to set up the repository:
$ source /cvmfs/riscv.eessi.io/versions/20240402/init/bash\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $\n
You can even source the initialization script of the software.eessi.io
production repository now, and it will automatically set up the RISC-V repository for you:
$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash \nRISC-V architecture detected, but there is no RISC-V support yet in the production repository.\nAutomatically switching to version 20240402 of the RISC-V development repository /cvmfs/riscv.eessi.io.\nFor more details about this repository, see https://www.eessi.io/docs/repositories/riscv.eessi.io/.\n\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all as the site extension directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $ \n
Note that we currently only provide generic builds, hence riscv64/generic
is being used for all RISC-V CPUs.
The amount of software is constantly increasing. Besides having the foss/2023b
toolchain available, applications like dlb, GROMACS, OSU Micro-Benchmarks, and R are already available as well. Use module avail
to get a full and up-to-date listing of available software.
"},{"location":"repositories/riscv.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for this repository is shown at https://status.eessi.io.
"},{"location":"repositories/software.eessi.io/","title":"Production EESSI repository (software.eessi.io
)","text":""},{"location":"repositories/software.eessi.io/#question-or-problems","title":"Question or problems","text":"If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request.
"},{"location":"repositories/software.eessi.io/#accessing-the-eessi-repository","title":"Accessing the EESSI repository","text":"See Getting access.
"},{"location":"repositories/software.eessi.io/#using-softwareeessiio","title":"Using software.eessi.io
","text":"See Using EESSI.
"},{"location":"repositories/software.eessi.io/#available-software","title":"Available software","text":"Detailed overview of available software coming soon!
For now, use module avail
after initializing the EESSI environment.
"},{"location":"repositories/software.eessi.io/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":"See CPU targets.
"},{"location":"repositories/software.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.
"},{"location":"software_layer/build_nodes/","title":"Build nodes","text":"Any system can be used as a build node to create additional software installations that should be added to the EESSI CernVM-FS repository.
"},{"location":"software_layer/build_nodes/#requirements","title":"Requirements","text":"OS and software:
- GNU/Linux (any distribution) as operating system;
- a recent version of Singularity (>= 3.6 is recommended);
- check with
singularity --version
screen
or tmux
is highly recommended;
Admin privileges are not required, as long as Singularity is installed.
Resources:
- 8 or more cores is recommended (though not strictly required);
- at least 50GB of free space on a local filesystem (like
/tmp
); - at least 16GB of memory (2GB/core or higher recommended);
Instructions to install Singularity and screen (click to show commands):
CentOS 8 (x86_64
or aarch64
or ppc64le
) sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\nsudo dnf update -y\nsudo dnf install -y screen singularity\n
"},{"location":"software_layer/build_nodes/#setting-up-the-container","title":"Setting up the container","text":"Warning
It is highly recommended to start a screen
or tmux
session first!
A container image is provided that includes everything that is required to set up a writable overlay on top of the EESSI CernVM-FS repository.
First, pick a location on a local filesystem for the temporary directory:
Requirements:
- Do not use a shared filesystem like NFS, Lustre or GPFS.
- There should be at least 50GB of free disk space in this local filesystem (more is better).
- There should be no automatic cleanup of old files via a cron job on this local filesystem.
- Try to make sure the directory is unique (not used by anything else).
NB. If you are going to install on a separate drive (due to lack of space on /), then you need to set some variables to point to that location. You will also need to bind mount it in the singularity
command. Let's say that you drive is mounted in /srt. Then you change the relevant commands below to this:
export EESSI_TMPDIR=/srt/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\nmkdir /srt/tmp\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs,/srt/tmp:/tmp\"\nsingularity shell -B /srt --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n
We will assume that /tmp/$USER/EESSI
meets these requirements:
export EESSI_TMPDIR=/tmp/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\n
Create some subdirectories in this temporary directory:
mkdir -p $EESSI_TMPDIR/{home,overlay-upper,overlay-work}\nmkdir -p $EESSI_TMPDIR/{var-lib-cvmfs,var-run-cvmfs}\n
Configure Singularity cache directory, bind mounts, and (fake) home directory:
export SINGULARITY_CACHEDIR=$EESSI_TMPDIR/singularity_cache\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"$EESSI_TMPDIR/home:/home/$USER\"\n
Define values to pass to --fusemount` in
singularity`` command:
export EESSI_READONLY=\"container:cvmfs2 software.eessi.io /cvmfs_ro/software.eessi.io\"\nexport EESSI_WRITABLE_OVERLAY=\"container:fuse-overlayfs -o lowerdir=/cvmfs_ro/software.eessi.io -o upperdir=$EESSI_TMPDIR/overlay-upper -o workdir=$EESSI_TMPDIR/overlay-work /cvmfs/software.eessi.io\"\n
Start the container (which includes Debian 11, CernVM-FS and fuse-overlayfs):
singularity shell --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian10\n
Once the container image has been downloaded and converted to a Singularity image (SIF format), you should get a prompt like this:
...\nCernVM-FS: loading Fuse module... done\n\nSingularity>\n
and the EESSI CernVM-FS repository should be mounted:
Singularity> ls /cvmfs/software.eessi.io\nhost_injections README.eessi versions\n
"},{"location":"software_layer/build_nodes/#setting-up-the-environment","title":"Setting up the environment","text":"Set up the environment by starting a Gentoo Prefix session using the startprefix
command.
Make sure you use the correct version of the EESSI repository!
export EESSI_VERSION='2023.06' \n/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/compat/linux/$(uname -m)/startprefix\n
"},{"location":"software_layer/build_nodes/#installing-software","title":"Installing software","text":"Clone the software-layer repository:
git clone https://github.com/EESSI/software-layer.git\n
Run the software installation script in software-layer
:
cd software-layer\n./EESSI-install-software.sh\n
This script will figure out the CPU microarchitecture of the host automatically (like x86_64/intel/haswell
).
To build generic software installations (like x86_64/generic
), use the --generic
option:
./EESSI-install-software.sh --generic\n
Once all missing software has been installed, you should see a message like this:
No missing modules!\n
"},{"location":"software_layer/build_nodes/#creating-tarball-to-ingest","title":"Creating tarball to ingest","text":"Before tearing down the build node, you should create tarball to ingest into the EESSI CernVM-FS repository.
To create a tarball of all installations, assuming your build host is x86_64/intel/haswell
:
export EESSI_VERSION='2023.06'\ncd /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/software/linux\neessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell.tar.gz\"\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell\n
To create a tarball for specific installations, make sure you pick up both the software installation directories and the corresponding module files:
eessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell-OpenFOAM.tar.gz\"\n\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell/software/OpenFOAM modules/all//OpenFOAM\n
This tarball should be uploaded to the Stratum 0 server for ingestion. If needed, you can ask for help in the EESSI #software-layer
Slack channel
"},{"location":"software_layer/cpu_targets/","title":"CPU targets","text":"In the 2023.06 version of the EESSI repository, the following CPU microarchitectures are supported.
aarch64/generic
: fallback for Arm 64-bit CPUs (like Raspberri Pi, etc.) aarch64/neoverse_n1
: AWS Graviton 2, Ampere Altra, ... aarch64/neoverse_v1
: AWS Graviton 3 x86_64/generic
: fallback for older Intel + AMD CPUs (like Intel Sandy Bridge, ...) x86_64/amd/zen2
: AMD Rome x86_64/amd/zen3
: AMD Milan, AMD Milan X x86_64/intel/haswell
: Intel Haswell, Broadwell x86_64/intel/skylake_avx512
: Intel Skylake, Cascade Lake, Ice Lake, ...
The names of these CPU targets correspond to the names used by archspec.
"},{"location":"talks/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"AWS HPC Tech Short (~8 min.) - 15 June 2023
"},{"location":"talks/2023/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"AWS HPC Tech Short (~8 min.) - 15 June 2023
"},{"location":"talks/2023/20231027_packagingcon23_eessi/","title":"Streaming optimized scientific software installations on any Linux distro with EESSI","text":" - PackagingCon'2023 (Berlin, Germany) - 27 Oct 2023
- presented by Kenneth Hoste & Lara Peeters (HPC-UGent)
- slides (PDF)
"},{"location":"talks/2023/20231204_cvmfs_hpc/","title":"Best Practices for CernVM-FS in HPC","text":" - online tutorial (~3h15min), 4 Dec 2023
- presented by Kenneth Hoste (HPC-UGent)
- tutorial website: https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices
- slides (PDF)
"},{"location":"talks/2023/20231205_castiel2_eessi_intro/","title":"Streaming Optimised Scientific Software: an Introduction to EESSI","text":" - online tutorial (~1h40min) - 5 Dec 2023
- presented by Alan O'Cais (CECAM)
- slides (PDF)
"},{"location":"test-suite/","title":"EESSI test suite","text":"The EESSI test suite is a collection of tests that are run using ReFrame. It is used to check whether the software installations included in the EESSI software layer are working and performing as expected.
To get started, you should look into the installation and configuration guidelines first.
To write the ReFrame configuration file for your system, check ReFrame configuration file.
For which software tests are available, see available-tests.md.
For more information on using the EESSI test suite, see here.
See also release notes for the EESSI test suite.
"},{"location":"test-suite/ReFrame-configuration-file/","title":"ReFrame configuration file","text":"In order for ReFrame to run tests on your system, it needs to know some properties about your system. For example, it needs to know what kind of job scheduler you have, which partitions the system has, how to submit to those partitions, etc. All of this has to be described in a ReFrame configuration file (see also the section on $RFM_CONFIG_FILES
above).
This page is organized as follows:
- available ReFrame configuration file
- Verifying your ReFrame configuration
- How to write a ReFrame configuration file
"},{"location":"test-suite/ReFrame-configuration-file/#available-reframe-configuration-file","title":"Available ReFrame configuration file","text":"There are some available ReFrame configuration files for HPC systems and public cloud in the config directory for more inspiration. Below is a simple ReFrame configuration file with minimal changes required for getting you started on using the test suite for a CPU partition. Please check that stagedir
is set to a path on a (shared) scratch filesystem for storing (temporary) files related to the tests, and access
is set to a list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu' for submitting to a Slurm partition called cpu).
To write a ReFrame configuration file for your system, check the section How to write a ReFrame configuration file.
\"\"\"\nsimple ReFrame configuration file\n\"\"\"\nimport os\n\nfrom eessi.testsuite.common_config import common_logging_config, common_eessi_init, format_perfvars, perflog_format\nfrom eessi.testsuite.constants import * \n\nsite_configuration = {\n 'systems': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition',\n 'modules_system': 'lmod',\n 'hostnames': ['*'],\n # Note that the stagedir should be a shared directory available on all nodes running ReFrame tests\n 'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n 'partitions': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition',\n 'scheduler': 'slurm',\n 'launcher': 'mpirun',\n 'access': ['-p cpu', '--export=None'],\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'resources': [\n {\n 'name': 'memory',\n 'options': ['--mem={size}'],\n }\n ],\n 'features': [\n FEATURES[CPU]\n ] + list(SCALES.keys()),\n }\n ]\n },\n ],\n 'environments': [\n {\n 'name': 'default',\n 'cc': 'cc',\n 'cxx': '',\n 'ftn': '',\n },\n ],\n 'logging': common_logging_config(),\n 'general': [\n {\n # Enable automatic detection of CPU architecture for each partition\n # See https://reframe-hpc.readthedocs.io/en/stable/configure.html#auto-detecting-processor-information\n 'remote_detect': True,\n }\n ],\n}\n\n# optional logging to syslog\nsite_configuration['logging'][0]['handlers_perflog'].append({\n 'type': 'syslog',\n 'address': '/dev/log',\n 'level': 'info',\n 'format': f'reframe: {perflog_format}',\n 'format_perfvars': format_perfvars,\n 'append': True,\n})\n
"},{"location":"test-suite/ReFrame-configuration-file/#verifying-your-reframe-configuration","title":"Verifying your ReFrame configuration","text":"To verify the ReFrame configuration, you can query the configuration using --show-config
.
To see the full configuration, use:
reframe --show-config\n
To only show the configuration of a particular system partition, you can use the --system
option. To query a specific setting, you can pass an argument to --show-config
.
For example, to show the configuration of the gpu
partition of the example
system:
reframe --system example:gpu --show-config systems/0/partitions\n
You can drill it down further to only show the value of a particular configuration setting.
For example, to only show the launcher
value for the gpu
partition of the example
system:
reframe --system example:gpu --show-config systems/0/partitions/@gpu/launcher\n
"},{"location":"test-suite/ReFrame-configuration-file/#how-to-write-a-reframe-configuration-file","title":"How to write a ReFrame configuration file","text":"The official ReFrame documentation provides the full description on configuring ReFrame for your site. However, there are some configuration settings that are specifically required for the EESSI test suite. Also, there are a large amount of configuration settings available in ReFrame, which makes the official documentation potentially a bit overwhelming.
Here, we will describe how to create a configuration file that works with the EESSI test suite, starting from an example configuration file settings_example.py
, which defines the most common configuration settings.
"},{"location":"test-suite/ReFrame-configuration-file/#python-imports","title":"Python imports","text":"The EESSI test suite standardizes a few string-based values as constants, as well as the logging format used by ReFrame. Every ReFrame configuration file used for running the EESSI test suite should therefore start with the following import statements:
from eessi.testsuite.common_config import common_logging_config, common_eessi_init\nfrom eessi.testsuite.constants import *\n
"},{"location":"test-suite/ReFrame-configuration-file/#high-level-system-info-systems","title":"High-level system info (systems
)","text":"First, we describe the system at its highest level through the systems
keyword.
You can define multiple systems in a single configuration file (systems
is a Python list value). We recommend defining just a single system in each configuration file, as it makes the configuration file a bit easier to digest (for humans).
An example of the systems
section of the configuration file would be:
site_configuration = {\n 'systems': [\n # We could list multiple systems. Here, we just define one\n {\n 'name': 'example',\n 'descr': 'Example cluster',\n 'modules_system': 'lmod',\n 'hostnames': ['*'],\n 'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n 'partitions': [...],\n }\n ]\n}\n
The most common configuration items defined at this level are:
name
: The name of the system. Pick whatever makes sense for you. descr
: Description of the system. Again, pick whatever you like. modules_system
: The modules system used on your system. EESSI provides modules in lmod
format. There is no need to change this, unless you want to run tests from the EESSI test suite with non-EESSI modules. hostnames
: The names of the hosts on which you will run the ReFrame command, as regular expression. Using these names, ReFrame can automatically determine which of the listed configurations in the systems
list to use, which is useful if you're defining multiple systems in a single configuration file. If you follow our recommendation to limit yourself to one system per configuration file, simply define 'hostnames': ['*']
. prefix
: Prefix directory for a ReFrame run on this system. Any directories or files produced by ReFrame will use this prefix, if not specified otherwise. We recommend setting the $RFM_PREFIX
environment variable rather than specifying prefix
in your configuration file, so our common logging configuration can pick up on it (see also $RFM_PREFIX
). stagedir
: A shared directory that is available on all nodes that will execute ReFrame tests. This is used for storing (temporary) files related to the test. Typically, you want to set this to a path on a (shared) scratch filesystem. Defining this is optional: the default is a 'stage
' directory inside the prefix
directory. partitions
: Details on system partitions, see below.
"},{"location":"test-suite/ReFrame-configuration-file/#partitions","title":"System partitions (systems.partitions
)","text":"The next step is to add the system partitions to the configuration files, which is also specified as a Python list since a system can have multiple partitions.
The partitions
section of the configuration for a system with two Slurm partitions (one CPU partition, and one GPU partition) could for example look something like this:
site_configuration = {\n 'systems': [\n {\n ...\n 'partitions': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition',\n 'scheduler': 'slurm',\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'launcher': 'mpirun',\n 'access': ['-p cpu'],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'features': [\n FEATURES[CPU]\n ] + list(SCALES.keys()),\n },\n {\n 'name': 'gpu_partition',\n 'descr': 'GPU partition',\n 'scheduler': 'slurm',\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'launcher': 'mpirun',\n 'access': ['-p gpu'],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'resources': [\n {\n 'name': '_rfm_gpu',\n 'options': ['--gpus-per-node={num_gpus_per_node}'],\n }\n ],\n 'devices': [\n {\n 'type': DEVICE_TYPES[GPU],\n 'num_devices': 4,\n }\n ],\n 'features': [\n FEATURES[CPU],\n FEATURES[GPU],\n ],\n 'extras': {\n GPU_VENDOR: GPU_VENDORS[NVIDIA],\n },\n },\n ]\n }\n ]\n}\n
The most common configuration items defined at this level are:
name
: The name of the partition. Pick anything you like. descr
: Description of the partition. Again, pick whatever you like. scheduler
: The scheduler used to submit to this partition, for example slurm
. All valid options can be found in the ReFrame documentation. launcher
: The parallel launcher used on this partition, for example mpirun
or srun
. All valid options can be found in the ReFrame documentation. access
: A list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu
' for submitting to a Slurm partition called cpu
). If supported by your scheduler, we recommend not exporting the submission environment (for example by using '--export=None
' with Slurm). This avoids test failures due to environment variables set in the submission environment that are passed down to submitted jobs (see the short example after this list). prepare_cmds
: Commands to execute at the start of every job that runs a test. If your batch scheduler does not export the environment of the submit host, this is typically where you can initialize the EESSI environment. environs
: The names of the programming environments (to be defined later in the configuration file via environments
) that may be used on this partition. A programming environment is required for tests that are compiled first, before they can run. However, the EESSI test suite only tests existing software installations, so no compilation (or specific programming environment) is needed. Simply specify 'environs': ['default']
, since ReFrame requires that a default environment is defined. max_jobs
: The maximum number of jobs ReFrame is allowed to submit in parallel. Some batch systems limit how many jobs users are allowed to have in the queue. You can use this to make sure ReFrame doesn't exceed that limit. resources
: This field defines how additional resources can be requested in a batch job. Specifically, on a GPU partition, you have to define a resource with the name '_rfm_gpu
'. The options
field should then contain the argument to be passed to the batch scheduler in order to request a certain number of GPUs per node, which could be different for different batch schedulers. For example, when using Slurm you would specify: 'resources': [\n {\n 'name': '_rfm_gpu',\n 'options': ['--gpus-per-node={num_gpus_per_node}'],\n },\n],\n
processor
: We recommend NOT defining this field, unless CPU autodetection is not working for you. The EESSI test suite relies on information about your processor topology to run. Using CPU autodetection is the easiest way to ensure that all processor-related information needed by the EESSI test suite is defined. Only if CPU autodetection is failing for you do we advise you to set the processor
in the partition configuration as an alternative. Although additional fields might be used by future EESSI tests, at this point you'll have to specify at least the following fields: 'processor': {\n 'num_cpus': 64, # Total number of CPU cores in a node\n 'num_sockets': 2, # Number of sockets in a node\n 'num_cpus_per_socket': 32, # Number of CPU cores per socket\n 'num_cpus_per_core': 1, # Number of hardware threads per CPU core\n} \n
features
: The features
field is used by the EESSI test suite to run tests only on a partition if it supports a certain feature (for example if GPUs are available). Feature names are standardized in the EESSI test suite in eessi.testsuite.constants.FEATURES
dictionary. Typically, you want to define features: [FEATURES[CPU]] + list(SCALES.keys())
for CPU-based partitions, and features: [FEATURES[GPU]] + list(SCALES.keys())
for GPU-based partitions. The first tells the EESSI test suite that this partition can only run CPU-based tests, whereas the second indicates that this partition can only run GPU-based tests. You can define a single partition to have both the CPU and GPU features (since features
is a Python list). However, since the CPU-based tests will not ask your batch scheduler for GPU resources, this may fail on batch systems that force you to ask for at least one GPU on GPU-based nodes. Also, running CPU-only code on a GPU node is typically considered bad practice, thus testing its functionality is typically not relevant. The list(SCALES.keys())
adds all the scales that may be used by EESSI tests to the features
list. These scales are defined in eessi.testsuite.constants.SCALES
and define at which scales tests should be run, e.g. single core, half a node, a full node, two nodes, etc. This can be used to exclude running at certain scales on systems that would not support it. E.g. some systems might not support requesting multiple partial nodes, which is what the 1_cpn_2_nodes
(1 core per node, on two nodes) and 1_cpn_4_nodes
scales do. One could exclude these by setting e.g. features: [FEATURES[CPU]] + [s for s in SCALES if s not in ['1_cpn_2_nodes', '1_cpn_4_nodes']]
. With this configuration setting, ReFrame will run all the scales listed in eessi.testsuite.constants.SCALES except those two. In a similar way, one could exclude all multi-node tests if one only has a single node available. devices
: This field specifies information on devices (for example, GPUs) present in the partition. Device types are standardized in the EESSI test suite in the eessi.testsuite.constants.DEVICE_TYPES
dictionary. This is used by the EESSI test suite to determine how many of these devices it can/should use per node. Typically, there is no need to define devices
for CPU partitions. For GPU partitions, you want to define something like: 'devices': [\n {\n 'type': DEVICE_TYPES[GPU],\n 'num_devices': 4, # or however many GPUs you have per node\n }\n]\n
extras
: This field specifies extra information on the partition, such as the GPU vendor. Valid fields for extras
are standardized as constants in eessi.testsuite.constants
(for example GPU_VENDOR
). This is used by the EESSI test suite to decide if a partition can run a test that specifically requires a certain brand of GPU. Typically, there is no need to define extras
for CPU partitions. For GPU partitions, you typically want to specify the GPU vendor, for example: 'extras': {\n GPU_VENDOR: GPU_VENDORS[NVIDIA]\n}\n
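To illustrate the access recommendation given above (a minimal sketch; adjust the Slurm partition name to your system):
'access': ['-p cpu', '--export=None'],\n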
Note that as more tests are added to the EESSI test suite, the use of features
, devices
and extras
by the EESSI test suite may be extended, which may require an update of your configuration file to define newly recognized fields.
Note
Keep in mind that ReFrame partitions are virtual entities: they may or may not correspond to a partition as it is configured in your batch system. One might for example have a single partition in the batch system, but configure it as two separate partitions in the ReFrame configuration file based on additional constraints that are passed to the scheduler, see for example the AWS CitC example configuration.
The EESSI test suite (and more generally, ReFrame) assumes the hardware within a partition defined in the ReFrame configuration file is homogeneous.
"},{"location":"test-suite/ReFrame-configuration-file/#environments","title":"Environments","text":"ReFrame needs a programming environment to be defined in its configuration file for tests that need to be compiled before they are run. While we don't have such tests in the EESSI test suite, ReFrame requires some programming environment to be defined:
site_configuration = {\n ...\n 'environments': [\n {\n 'name': 'default', # Note: needs to match whatever we set for 'environs' in the partition\n 'cc': 'cc',\n 'cxx': '',\n 'ftn': '',\n }\n ]\n}\n
Note
The name
here needs to match whatever we specified for the environs
property of the partitions.
"},{"location":"test-suite/ReFrame-configuration-file/#logging","title":"Logging","text":"ReFrame allows a large degree of control over what gets logged, and where. For convenience, we have created a common logging configuration in eessi.testsuite.common_config
that provides a reasonable default. It can be used by importing common_logging_config
and calling it as a function to define the 'logging'
setting:
from eessi.testsuite.common_config import common_logging_config\n\nsite_configuration = {\n ...\n 'logging': common_logging_config(),\n}\n
When combined with setting the $RFM_PREFIX
environment variable, the output, performance log, and regular ReFrame logs will all end up in the directory specified by $RFM_PREFIX
, which we recommend doing. Alternatively, a prefix can be passed as an argument like common_logging_config(prefix)
, which will control where the regular ReFrame log ends up. Note that the performance logs do not respect this prefix: they will still end up in the standard ReFrame prefix (by default the current directory, unless otherwise set with $RFM_PREFIX
or --prefix
).
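For example, a minimal sketch that passes a custom prefix (the path shown is purely illustrative):
site_configuration = {\n ...\n 'logging': common_logging_config('/scratch/reframe_logs'),\n}\n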
"},{"location":"test-suite/ReFrame-configuration-file/#cpu-auto-detection","title":"Auto-detection of processor information","text":"You can let ReFrame auto-detect the processor information for your system.
ReFrame will automatically use auto-detection when two conditions are met:
- The
partitions
section of your configuration file does not specify processor
information for a particular partition (as per our recommendation in the previous section); - The
remote_detect
option is enabled in the general
part of the configuration, as follows: site_configuration = {\n 'systems': ...\n 'logging': ...\n 'general': [\n {\n 'remote_detect': True,\n }\n ]\n}\n
To trigger the auto-detection of processor information, it is sufficient to let ReFrame list the available tests:
reframe --list\n
ReFrame will store the processor information for your system in ~/.reframe/topology/<system>-<partition>/processor.json
.
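Purely as an illustration (the exact set of fields stored is determined by ReFrame), the stored processor information mirrors the kind of fields shown in the processor example above, for example:
{\"num_cpus\": 64, \"num_sockets\": 2, \"num_cpus_per_socket\": 32, \"num_cpus_per_core\": 1}\n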
"},{"location":"test-suite/available-tests/","title":"Available tests","text":"The EESSI test suite currently includes tests for:
- GROMACS
- TensorFlow
- OSU Micro-Benchmarks
For a complete overview of all available tests in the EESSI test suite, see the eessi/testsuite/tests
subdirectory in the EESSI/test-suite
GitHub repository.
"},{"location":"test-suite/available-tests/#gromacs","title":"GROMACS","text":"Several tests for GROMACS, a software package to perform molecular dynamics simulations, are included, which use the systems included in the HECBioSim benchmark suite:
Crambin
(20K atom system) Glutamine-Binding-Protein
(61K atom system) hEGFRDimer
(465K atom system) hEGFRDimerSmallerPL
(465K atom system, only 10k steps) hEGFRDimerPair
(1.4M atom system) hEGFRtetramerPair
(3M atom system)
It is implemented in tests/apps/gromacs.py
, on top of the GROMACS test that is included in the ReFrame test library hpctestlib
.
To run this GROMACS test with all HECBioSim systems, use:
reframe --run --name GROMACS\n
To run this GROMACS test only for a specific HECBioSim system, use for example:
reframe --run --name 'GROMACS.*HECBioSim/hEGFRDimerPair'\n
To run this GROMACS test with the smallest HECBioSim system (Crambin
), you can use the CI
tag:
reframe --run --name GROMACS --tag CI\n
"},{"location":"test-suite/available-tests/#tensorflow","title":"TensorFlow","text":"A test for TensorFlow, a machine learning framework, is included, which is based on the \"Multi-worker training with Keras\" TensorFlow tutorial.
It is implemented in tests/apps/tensorflow/
.
To run this TensorFlow test, use:
reframe --run --name TensorFlow\n
Warning
This test requires TensorFlow v2.11 or newer; using an older TensorFlow version will not work!
"},{"location":"test-suite/available-tests/#osumicrobenchmarks","title":"OSU Micro-Benchmarks","text":"A test for OSU Micro-Benchmarks, which provides an MPI benchmark.
It is implemented in tests/apps/osu.py
.
To run this OSU Micro-Benchmarks test, use:
reframe --run --name OSU-Micro-Benchmarks\n
Warning
This test requires OSU Micro-Benchmarks v5.9 or newer; using an older version will not work!
"},{"location":"test-suite/installation-configuration/","title":"Installing and configuring the EESSI test suite","text":"This page covers the requirements, installation and configuration of the EESSI test suite.
"},{"location":"test-suite/installation-configuration/#requirements","title":"Requirements","text":"The EESSI test suite requires
- Python >= 3.6
- ReFrame v4.3.3 (or newer)
- ReFrame test library (
hpctestlib
)
"},{"location":"test-suite/installation-configuration/#installing-reframe","title":"Installing Reframe","text":"General instructions for installing ReFrame are available in the ReFrame documentation. To check if ReFrame is available, run the reframe
command:
reframe --version\n
Regarding the ReFrame version requirement: two important bugs were resolved in ReFrame's CPU autodetection functionality in version 4.3.3.
We strongly recommend you use ReFrame >= 4.3.3
.
If you are using an older version of ReFrame, you may encounter some issues:
- ReFrame will try to use the parallel launcher command configured for each partition (e.g.
mpirun
) when doing remote autodetection. If there is no system version of mpirun
available, that will fail (see ReFrame issue #2926). - CPU autodetection only worked when using a clone of the ReFrame repository, not when it was installed with
pip
or EasyBuild
(as is also the case for the ReFrame shipped with EESSI) (see ReFrame issue #2914).
"},{"location":"test-suite/installation-configuration/#installing-reframe-test-library-hpctestlib","title":"Installing ReFrame test library (hpctestlib
)","text":"The EESSI test suite requires that the ReFrame test library (hpctestlib
) is available, which is currently not included in a standard installation of ReFrame.
We recommend installing ReFrame using EasyBuild (version 4.8.1, or newer), or using a ReFrame installation that is available in the EESSI repository (version 2023.06, or newer).
For example (using EESSI):
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ReFrame/4.3.3\n
To check whether the ReFrame test library is available, try importing a submodule of the hpctestlib
Python package:
python3 -c 'import hpctestlib.sciapps.gromacs'\n
"},{"location":"test-suite/installation-configuration/#installation","title":"Installation","text":"To install the EESSI test suite, you can either use pip
or clone the GitHub repository directly:
"},{"location":"test-suite/installation-configuration/#pip-install","title":"Using pip
","text":"pip install git+https://github.com/EESSI/test-suite.git\n
"},{"location":"test-suite/installation-configuration/#cloning-the-repository","title":"Cloning the repository","text":"git clone https://github.com/EESSI/test-suite $HOME/EESSI-test-suite\ncd EESSI-test-suite\nexport PYTHONPATH=$PWD:$PYTHONPATH\n
"},{"location":"test-suite/installation-configuration/#verify-installation","title":"Verify installation","text":"To check whether the EESSI test suite installed correctly, try importing the eessi.testsuite
Python package:
python3 -c 'import eessi.testsuite'\n
"},{"location":"test-suite/installation-configuration/#configuration","title":"Configuration","text":"Before you can run the EESSI test suite, you need to create a configuration file for ReFrame that is specific to the system on which the tests will be run.
Example configuration files are available in the config
subdirectory of the EESSI/test-suite
GitHub repository (https://github.com/EESSI/test-suite/tree/main/config), which you can use as a template to create your own.
"},{"location":"test-suite/installation-configuration/#configuring-reframe-environment-variables","title":"Configuring ReFrame environment variables","text":"We recommend setting a couple of $RFM_*
environment variables to configure ReFrame, to avoid having to pass particular options to the reframe
command over and over again.
"},{"location":"test-suite/installation-configuration/#RFM_CONFIG_FILES","title":"ReFrame configuration file ($RFM_CONFIG_FILES
)","text":"(see also RFM_CONFIG_FILES
in ReFrame docs)
Define the $RFM_CONFIG_FILES
environment variable to instruct ReFrame which configuration file to use, for example:
export RFM_CONFIG_FILES=$HOME/EESSI-test-suite/config/example.py\n
Alternatively, you can use the --config-file
(or -C
) reframe
option.
See the section on the ReFrame configuration file below for more information.
"},{"location":"test-suite/installation-configuration/#search-path-for-tests-rfm_check_search_path","title":"Search path for tests ($RFM_CHECK_SEARCH_PATH
)","text":"(see also RFM_CHECK_SEARCH_PATH
in ReFrame docs)
Define the $RFM_CHECK_SEARCH_PATH
environment variable to tell ReFrame which directory to search for tests.
In addition, define $RFM_CHECK_SEARCH_RECURSIVE
to ensure that ReFrame searches $RFM_CHECK_SEARCH_PATH
recursively (i.e. so that also tests in subdirectories are found).
For example:
export RFM_CHECK_SEARCH_PATH=$HOME/EESSI-test-suite/eessi/testsuite/tests\nexport RFM_CHECK_SEARCH_RECURSIVE=1\n
Alternatively, you can use the --checkpath
(or -c
) and --recursive
(or -R
) reframe
options.
"},{"location":"test-suite/installation-configuration/#RFM_PREFIX","title":"ReFrame prefix ($RFM_PREFIX
)","text":"(see also RFM_PREFIX
in ReFrame docs)
Define the $RFM_PREFIX
environment variable to tell ReFrame where to store the files it produces. E.g.
export RFM_PREFIX=$HOME/reframe_runs\n
This includes:
- test output directories (which contain e.g. the job script, stderr and stdout for each of the test jobs)
- staging directories (unless otherwise specified by
stagedir
, see below); - performance logs;
Note that the default is for ReFrame to use the current directory as prefix. We recommend setting a prefix so that logs are not scattered around, but are collected in one place and appended for each run.
If our common logging configuration is used, the regular ReFrame log file will also end up in the location specified by $RFM_PREFIX
.
Warning
Using the --prefix
option in your reframe
command is not equivalent to setting $RFM_PREFIX
, since our common logging configuration only picks up on the $RFM_PREFIX
environment variable to determine the location for the ReFrame log file.
"},{"location":"test-suite/release-notes/","title":"Release notes for EESSI test suite","text":""},{"location":"test-suite/release-notes/#020-7-march-2024","title":"0.2.0 (7 march 2024)","text":"This is a minor release of the EESSI test-suite
It includes:
- Implement the CI for regular runs on a system (#93)
- Add OSU tests and update the hooks and configs to make the tests portable (#54, #95, #96, #97, #110, #116, #117, #118, #121)
- Add extra scales to filter tests (#94)
- add new hook to filter out invalid scales based on features in the config (#111)
- unify test names (#108)
- updates to CI workflow (#102, #103, #104, #105)
- Update common_config (#114)
- Add common config item to redirect the report file to the same directory as e.g. the perflog (#122)
- Fix code formatting + enforce it in CI workflow (#120)
Bug fixes:
- Fix hook _assign_num_tasks_per_node (#98)
- fix import common-config vsc_hortense (#99)
- fix typo in partition names in configuration file for vsc_hortense (#106)
"},{"location":"test-suite/release-notes/#010-5-october-2023","title":"0.1.0 (5 October 2023)","text":"Version 0.1.0 is the first release of the EESSI test suite.
It includes:
- A well-structured
eessi.testsuite
Python package that provides constants, utilities, hooks, and tests, which can be installed with \"pip install
\". - Tests for GROMACS and TensorFlow in
eessi.testsuite.tests.apps
that leverage the functionality provided by eessi.testsuite.*
. - Examples of ReFrame configuration files for various systems in the
config
subdirectory. - A
common_logging_config()
function to facilitate the ReFrame logging configuration. - A set of standard device types and features that can be used in the
partitions
section of the ReFrame configuration file. - A set of tags (
CI
+ scale
) that can be used to filter checks. - Scripts that show how to run the test suite.
"},{"location":"test-suite/usage/","title":"Using the EESSI test suite","text":"This page covers the usage of the EESSI test suite.
We assume you have already installed and configured the EESSI test suite on your system.
"},{"location":"test-suite/usage/#listing-available-tests","title":"Listing available tests","text":"To list the tests that are available in the EESSI test suite, use reframe --list
(or reframe -L
for short).
If you have properly configured ReFrame, you should see a (potentially long) list of checks in the output:
$ reframe --list\n...\n[List of matched checks]\n- ...\nFound 123 check(s)\n
Note
When using --list
, checks are only generated based on modules that are available in the system where the reframe
command is invoked.
The system partitions specified in your ReFrame configuration file are not taken into account when using --list
.
So, if --list
produces an overview of 50 checks, and you have 4 system partitions in your configuration file, actually running the test suite may result in (up to) 200 checks being executed.
"},{"location":"test-suite/usage/#dry-run","title":"Performing a dry run","text":"To perform a dry run of the EESSI test suite, use reframe --dry-run
:
$ reframe --dry-run\n...\n[==========] Running 1234 check(s)\n\n[----------] start processing checks\n[ DRY ] GROMACS_EESSI ...\n...\n[----------] all spawned checks have finished\n\n[ PASSED ] Ran 1234/1234 test case(s) from 1234 check(s) (0 failure(s), 0 skipped, 0 aborted)\n
Note
When using --dry-run
, the system partitions listed in your ReFrame configuration file are also taken into account when generating checks, in addition to available modules and test parameters, which is not the case when using --list
.
"},{"location":"test-suite/usage/#running-the-full-test-suite","title":"Running the (full) test suite","text":"To actually run the (full) EESSI test suite and let ReFrame produce a performance report, use reframe --run --performance-report
.
We strongly recommend filtering the checks that will be run by using additional options like --system
, --name
, --tag
(see the 'Filtering tests' section below), and doing a dry run first to make sure that the generated checks correspond to what you have in mind.
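For example, a reasonable workflow is to dry-run a filtered selection first and then launch the same selection for real (the system and test names below are just illustrations):
reframe --dry-run --system example:cpu --name GROMACS --tag CI\nreframe --run --performance-report --system example:cpu --name GROMACS --tag CI\n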
"},{"location":"test-suite/usage/#reframe-output-and-log-files","title":"ReFrame output and log files","text":"ReFrame will generate various output and log files:
- a general ReFrame log file with debug logging on the ReFrame run (incl. selection of tests, generating checks, test results, etc.);
- stage directories for each generated check, in which the checks are run;
- output directories for each generated check, which include the test output;
- performance log files for each test, which include performance results for the test runs;
We strongly recommend controlling where these files go by using the common logging configuration that is provided by the EESSI test suite in your ReFrame configuration file and setting $RFM_PREFIX
(avoid using the cmd line option --prefix
).
If you do, and if you use ReFrame v4.3.3 or newer, you should find the output and log files at:
- general ReFrame log file at
$RFM_PREFIX/logs/reframe_<datestamp>_<timestamp>.log
; - stage directories in
$RFM_PREFIX/stage/<system>/<partition>/<environment>/
; - output directories in
$RFM_PREFIX/output/<system>/<partition>/<environment>/
; - performance log files in
$RFM_PREFIX/perflogs/<system>/<partition>/<environment>/
;
In the stage and output directories, there will be a subdirectory for each check that was run, which are tagged with a unique hash (like d3adb33f
) that is determined based on the specific parameters for that check (see the ReFrame documentation for more details on the test naming scheme).
"},{"location":"test-suite/usage/#filtering-tests","title":"Filtering tests","text":"By default, ReFrame will automatically generate checks for each system partition, based on the tests available in the EESSI test suite, available software modules, and tags defined in the EESSI test suite.
To avoid being overwhelmed by checks, it is recommended to apply filters so ReFrame only generates the checks you are interested in.
"},{"location":"test-suite/usage/#filter-name","title":"Filtering by test name","text":"You can filter checks based on the full test name using the --name
option (or -n
), which includes the value for all test parameters.
Here's an example of a full test name:
GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f @example:gpu+default\n
To let ReFrame only generate checks for GROMACS, you can use:
reframe --name GROMACS\n
To only run GROMACS checks with a particular version of GROMACS, you can use --name
to only retain specific GROMACS
modules:
reframe --name %module_name=GROMACS/2023.1\n
Likewise, you can filter on any part of the test name.
You can also select one specific check using the corresponding test hash, which is also part of the full test name (see /d3adb33f
in the example above): for example:
reframe --name /d3adb33f\n
The argument passed to --name
is interpreted as a Python regular expression, so you can use wildcards like .*
, character ranges like [0-9]
, use ^
to specify that the pattern should match from the start of the test name, etc.
Use --list
or --dry-run
to check the impact of using the --name
option.
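As an illustration of such a regular expression, the following (hypothetical) command only lists single-node GROMACS checks:
reframe --list --name '^GROMACS.*%scale=1_node'\n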
"},{"location":"test-suite/usage/#filter-system-partition","title":"Filtering by system (partition)","text":"By default, ReFrame will generate checks for each system partition that is listed in your configuration file.
To only let ReFrame generate checks for a particular system or system partition, you can use the --system
option.
For example:
- To let ReFrame only generate checks for the system named
example
, use: reframe --system example ...\n
- To let ReFrame only generate checks for the
gpu
partition of the system named example
, use: reframe --system example:gpu ...\n
Use --dry-run
to check the impact of using the --system
option.
"},{"location":"test-suite/usage/#filter-tag","title":"Filtering by tags","text":"To filter tests using one or more tags, you can use the --tag
option.
Using --list-tags
you can get a list of known tags.
To check the impact of this on generated checks by ReFrame, use --list
or --dry-run
.
"},{"location":"test-suite/usage/#ci-tag","title":"CI
tag","text":"For each software that is included in the EESSI test suite, a small test is tagged with CI
to indicate it can be used in a Continuous Integration (CI) environment.
Hence, you can use this tag to let ReFrame only generate checks for small test cases:
reframe --tag CI\n
For example:
$ reframe --name GROMACS --tag CI\n...\n
"},{"location":"test-suite/usage/#scale-tags","title":"scale
tags","text":"The EESSI test suite defines a set of custom tags that control the scale of checks, which specify many cores/GPUs/nodes should be used for running a check. The number of cores and GPUs serves as an upper limit; the actual count depends on the specific configuration of cores, GPUs, and sockets within the node, as well as the specific test being carried out.
tag name description 1_core
using 1 CPU core and 1 GPU 2_cores
using 2 CPU cores and 1 GPU 4_cores
using 4 CPU cores and 1 GPU 1_cpn_2_nodes
using 1 CPU core per node, 1 GPU per node, and 2 nodes 1_cpn_4_nodes
using 1 CPU core per node, 1 GPU per node, and 4 nodes 1_8_node
using 1/8th of a node (12.5% of available cores/GPUs, 1 at minimum) 1_4_node
using a quarter of a node (25% of available cores/GPUs, 1 at minimum) 1_2_node
using half of a node (50% of available cores/GPUs, 1 at minimum) 1_node
using a full node (all available cores/GPUs) 2_nodes
using 2 full nodes 4_nodes
using 4 full nodes 8_nodes
using 8 full nodes 16_nodes
using 16 full nodes"},{"location":"test-suite/usage/#using-multiple-tags","title":"Using multiple tags","text":"To filter tests using multiple tags, you can:
- use
|
as separator to indicate that one of the specified tags must match (logical OR, for example --tag='1_core|2_cores'
); - use the
--tag
option multiple times to indicate that all specified tags must match (logical AND, for example --tag CI --tag 1_core
);
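For instance, to list all CI checks that run on either one or two cores, both approaches can be combined:
reframe --list --tag CI --tag '1_core|2_cores'\n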
"},{"location":"test-suite/usage/#example-commands","title":"Example commands","text":"Running all GROMACS tests on 4 cores on the cpu
partition
reframe --run --system example:cpu --name GROMACS --tag 4_cores --performance-report\n
List all checks for TensorFlow 2.11 using a single node
reframe --list --name %module_name=TensorFlow/2.11 --tag 1_node\n
Dry run of TensorFlow CI checks on a quarter (1/4) of a node (on all system partitions)
reframe --dry-run --name 'TensorFlow.*CUDA' --tag 1_4_node --tag CI\n
"},{"location":"test-suite/usage/#overriding-test-parameters-advanced","title":"Overriding test parameters (advanced)","text":"You can override test parameters using the --setvar
option (or -S
).
This can be done either globally (for all tests), or only for specific tests (which is recommended when using --setvar
).
For example, to run all GROMACS checks with a specific GROMACS module, you can use:
reframe --setvar GROMACS_EESSI.modules=GROMACS/2023.1-foss-2022a ...\n
Warning
We do not recommend using --setvar
, since it is quite easy to make unintended changes to test parameters this way that can result in broken checks.
You should try filtering tests using the --name
or --tag
options instead.
"},{"location":"using_eessi/basic_commands/","title":"Basic commands","text":""},{"location":"using_eessi/basic_commands/#basic-commands-to-access-software-provided-via-eessi","title":"Basic commands to access software provided via EESSI","text":"EESSI provides software through environment module files and Lmod.
To see which modules (and extensions) are available, run:
module avail\n
Below is a short excerpt of the output produced by module avail
, showing 10 modules only.
PyYAML/5.3-GCCcore-9.3.0\n Qt5/5.14.1-GCCcore-9.3.0\n Qt5/5.15.2-GCCcore-10.3.0 (D)\n QuantumESPRESSO/6.6-foss-2020a\n R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n R/4.0.0-foss-2020a\n R/4.1.0-foss-2021a (D)\n re2c/1.3-GCCcore-9.3.0\n re2c/2.1.1-GCCcore-10.3.0 (D)\n RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n
Load modules with module load package/version
, e.g., module load R/4.1.0-foss-2021a
, and try out the software. See below for a short session:
[EESSI 2023.06] $ module load R/4.1.0-foss-2021a\n[EESSI 2021.06] $ which R\n/cvmfs/software.eessi.io/versions/2021.12/software/linux/x86_64/intel/skylake_avx512/software/R/4.1.0-foss-2021a/bin/R\n[EESSI 2023.06] $ R --version\nR version 4.1.0 (2021-05-18) -- \"Camp Pontanezen\"\nCopyright (C) 2021 The R Foundation for Statistical Computing\nPlatform: x86_64-pc-linux-gnu (64-bit)\n\nR is free software and comes with ABSOLUTELY NO WARRANTY.\nYou are welcome to redistribute it under the terms of the\nGNU General Public License versions 2 or 3.\nFor more information about these matters see\nhttps://www.gnu.org/licenses/.\n
"},{"location":"using_eessi/building_on_eessi/","title":"Building software on top of EESSI","text":""},{"location":"using_eessi/building_on_eessi/#building-software-on-top-of-eessi-with-easybuild","title":"Building software on top of EESSI with EasyBuild","text":"Building on top of EESSI with EasyBuild is relatively straightforward. One crucial feature is that EasyBuild supports building against operating system libraries that are not in a standard prefix (such as /usr/lib
). This is required when building against EESSI, since all of the software in EESSI is built against the compatibility layer.
"},{"location":"using_eessi/building_on_eessi/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"Start your environment as described here
"},{"location":"using_eessi/building_on_eessi/#configure-easybuild","title":"Configure EasyBuild","text":"To configure EasyBuild, first, check out the EESSI software-layer repository. We advise you to check out the branch corresponding to the version of EESSI you would like to use.
If you are unsure which version you are using, you can run
echo ${EESSI_VERSION}\n
to check it. To build on top of e.g. version 2023.06
of the EESSI software stack, we check it out, and go into that directory:
git clone https://github.com/EESSI/software-layer/ --branch 2023.06\ncd software-layer\n
Then, you have to pick a working directory (that you have write access to) where EasyBuild can do the build, and an install directory (with sufficient storage space), where EasyBuild can install it. In this example, we create a temporary directory in /tmp/
as our working directory, and use $HOME/.local/easybuild
as our installpath: export WORKDIR=$(mktemp --directory --tmpdir=/tmp -t eessi-build.XXXXXXXXXX)\nsource configure_easybuild\nexport EASYBUILD_INSTALLPATH=\"${HOME}/.local/easybuild\"\n
Next, you load the EasyBuild module that you want to use, e.g. module load EasyBuild/4.8.2\n
Finally, you can check the current configuration for EasyBuild using eb --show-config\n
Note
We use EasyBuild's default behaviour in optimizing for the host architecture. Since the EESSI initialization script also loads the EESSI stack that is optimized for your host architecture, this matches nicely. However, if you work on a cluster with heterogeneous node types, you have to realize you can only use these builds on the same architecture as where you build them. You can use different EASYBUILD_INSTALLPATH
s if you want to build for different host architectures. For example, when you are on a system that has a mix of AMD zen3
and AMD zen4
nodes, you might want to use EASYBUILD_INSTALLPATH=$HOME/.local/easybuild/zen3
when building on a zen3
node, EASYBUILD_INSTALLPATH=$HOME/.local/easybuild/zen4
when building on a zen4
node. Then, in the step below, instead of the module use
command listed there, you can use module use $HOME/.local/easybuild/zen3/modules/all
when you want to run on a zen3
node and module use $HOME/.local/easybuild/zen4/modules/all
when you want to run on a zen4
node.
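A minimal sketch of this approach, assuming the $EESSI_SOFTWARE_SUBDIR variable set by the EESSI initialization script reflects the CPU microarchitecture of the current node (e.g. x86_64/amd/zen3):
# sketch: pick an architecture-specific install path (assumes $EESSI_SOFTWARE_SUBDIR is set by the EESSI init script)\nexport EASYBUILD_INSTALLPATH=\"${HOME}/.local/easybuild/$(basename ${EESSI_SOFTWARE_SUBDIR})\"\nmodule use ${EASYBUILD_INSTALLPATH}/modules/all\n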
"},{"location":"using_eessi/building_on_eessi/#building","title":"Building","text":"Now, you are ready to build. For example, at the time of writing, netCDF-4.9.0-gompi-2022a.eb
was not in the EESSI environment yet, so you can build it yourself:
eb netCDF-4.9.0-gompi-2022a.eb\n
Note
If this netCDF module is available by the time you try this, you can force a local rebuild by adding the --rebuild
argument in order to experiment with building locally, or pick a different EasyConfig to build.
"},{"location":"using_eessi/building_on_eessi/#using-the-newly-built-module","title":"Using the newly built module","text":"First, you'll need to add the subdirectory of the EASYBUILD_INSTALLPATH
that contains the modules to the MODULEPATH
. You can do that using:
module use ${EASYBUILD_INSTALLPATH}/modules/all\n
You may want to do this as part of your .bashrc
.
Note
Be careful adding to the MODULEPATH
in your .bashrc
if you are on a cluster with heterogeneous architectures. You don't want to accidentally pick up a module that was not compiled for the correct architecture.
Since your module is built on top of the EESSI environment, that needs to be loaded first (as described here), if you haven't already done so.
Finally, you should be able to load your newly built module:
module load netCDF/4.9.0-gompi-2022a\n
"},{"location":"using_eessi/building_on_eessi/#manually-building-software-op-top-of-eessi","title":"Manually building software op top of EESSI","text":"Building software on top of EESSI would require your linker to use the same system-dependencies as the software in EESSI does. In other words: it requires you to link against libraries from the compatibility layer, instead of from your host OS.
While we plan to support this in the future, manually building on top of EESSI is currently not supported in a trivial way.
"},{"location":"using_eessi/eessi_demos/","title":"Running EESSI demos","text":"To really experience how using EESSI can significantly facilitate the work of researchers, we recommend running one or more of the EESSI demos.
First, clone the eessi-demo
Git repository, and move into the resulting directory:
git clone https://github.com/EESSI/eessi-demo.git\ncd eessi-demo\n
The contents of the directory should be something like this:
$ ls -l\ntotal 48\ndrwxrwxr-x 2 example users 4096 May 15 13:26 Bioconductor\ndrwxrwxr-x 2 example users 4096 May 15 13:26 ESPResSo\ndrwxrwxr-x 2 example users 4096 May 15 13:26 GROMACS\n-rw-rw-r-- 1 example users 18092 Dec 5 2022 LICENSE\ndrwxrwxr-x 2 example users 4096 May 15 13:26 OpenFOAM\n-rw-rw-r-- 1 example users 543 May 15 13:26 README.md\ndrwxrwxr-x 3 example users 4096 May 15 13:26 scripts\ndrwxrwxr-x 2 example users 4096 May 15 13:26 TensorFlow\n
The directories we care about are those that correspond to particular scientific software, like Bioconductor
, GROMACS
, OpenFOAM
, TensorFlow
, ...
Each of these contains a run.sh
script that can be used to start a small example run with that software. Every example takes a couple of minutes to run, even with limited resources only.
"},{"location":"using_eessi/eessi_demos/#example-running-tensorflow","title":"Example: running TensorFlow","text":"Let's try running the TensorFlow example.
First, we need to make sure that our environment is set up to use EESSI:
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n
Change to the TensorFlow
subdirectory of the eessi-demo
Git repository, and execute the run.sh
script:
[EESSI 2023.06] $ cd TensorFlow\n[EESSI 2023.06] $ ./run.sh\n
Shortly after starting the script, you should see output as shown below, which indicates that the TensorFlow example has started running:
Epoch 1/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.2983 - accuracy: 0.9140\nEpoch 2/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.1444 - accuracy: 0.9563\nEpoch 3/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.1078 - accuracy: 0.9670\nEpoch 4/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.0890 - accuracy: 0.9717\nEpoch 5/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.0732 - accuracy: 0.9772\n313/313 - 0s - loss: 0.0679 - accuracy: 0.9790 - 391ms/epoch - 1ms/step\n\nreal 1m24.645s\nuser 0m16.467s\nsys 0m0.910s\n
"},{"location":"using_eessi/setting_up_environment/","title":"Setting up your environment","text":"To set up the EESSI environment, simply run the command:
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n
This may take a while, as data is downloaded from a Stratum 1 server, which is part of the CernVM-FS infrastructure used to distribute files. You should see the following output:
Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\narchdetect says x86_64/amd/zen2\nUsing x86_64/amd/zen2 as software subdirectory.\nUsing /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} [user@system ~]$ # (2)!\n
- What is reported here depends on the CPU architecture of the machine on which you are running the
source
command. - This is the prompt indicating that you have access to the EESSI software stack.
The last line is the shell prompt.
Your environment is now set up, you are ready to start running software provided by EESSI!
"},{"location":"blog/archive/2024/","title":"2024","text":""}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to the EESSI project documentation!","text":"Quote
What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC cluster or cloud instance you use or maintain, without compromising on performance?
The European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is a collaboration between different European partners in the HPC community. The goal of this project is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure.
"},{"location":"#quick-links","title":"Quick links","text":" - What is EESSI?
- Contact info
For users:
software.eessi.io
repository - Access, initialize and use EESSI
- Overview of software
- How to run EESSI test suite
- Get help or report issue
For system administrators:
- EESSI layered structure: filesystem, compatibility, software
- Installing EESSI
- Setting up a mirror server
For contributors:
- Adding software to EESSI
- Meetings
The EESSI project was covered during a quick AWS HPC Tech Short video (15 June 2023):
"},{"location":"bot/","title":"Build-test-deploy bot","text":"Building, testing, and deploying software is done by one or more bot instances.
The EESSI build-test-deploy bot is implemented as a GitHub App in the eessi-bot-software-layer
repository.
It operates in the context of pull requests to the compatibility-layer
repository or the software-layer
repository, and follows the instructions supplied by humans, so the procedure of adding software to EESSI is semi-automatic.
It leverages the scripts provided in the bot/
subdirectory of the target repository (see for example here), like bot/build.sh
to build software, and bot/check-result.sh
to check whether the software was built correctly.
"},{"location":"bot/#high-level-design","title":"High-level design","text":"The bot consists of two components: the event handler, and the job manager.
"},{"location":"bot/#event-handler","title":"Event handler","text":"The bot event handler is responsible for handling GitHub events for the GitHub repositories it is registered to.
It is triggered for every event that it receives from GitHub. Most events are ignored, but specific events trigger the bot to take action.
Examples of actionable events are submitting of a comment that starts with bot:
, which may specify an instruction for the bot like building software, or adding a bot:deploy
label (see deploying).
"},{"location":"bot/#job-manager","title":"Job manager","text":"The bot job manager is responsible for monitoring the queued and running jobs, and reporting back when jobs completed.
It runs every couple of minutes as a cron job.
"},{"location":"bot/#basics","title":"Basics","text":"Instructions for the bot should always start with bot:
.
To get help from the bot, post a comment with bot: help
.
To make the bot report how it is configured, post a comment with bot: show_config
.
"},{"location":"bot/#permissions","title":"Permissions","text":"The bot is configured to only act on instructions issued by specific GitHub accounts.
There are separate configuration options for allowing to send instructions to the bot, to trigger building of software, and to deploy software installations in to the EESSI repository.
Note
Ask for help in the #software-layer-bot
channel of the EESSI Slack if needed!
"},{"location":"bot/#building","title":"Building","text":"To instruct the bot to build software, one or more build
instructions should be issued by posting a comment in the pull request (see also here).
The most basic build instruction that can be sent to the bot is:
bot: build\n
Warning
Only use bot: build
if you are confident that it is OK to do so.
Most likely, you want to supply one or more filters to avoid that the bot builds for all its configurations.
"},{"location":"bot/#filters","title":"Filters","text":"Build instructions can include filters that are applied by each bot instance to determine which builds should be executed, based on:
instance
: the name
of the bot instance, for example instance:aws
for the bot instance running in AWS; repository
: the target repository, for example eessi-2023.06-software
which corresponds to the 2023.06 version of the EESSI software layer; architecture
: the name of the CPU microarchitecture, for example x86_64/amd/zen2
;
Note
Use :
as separator to specify a value for a particular filter, do not add spaces after the :
.
The bot recognizes shorthands for the supported filters, so you can use inst:...
instead of instance:...
, repo:...
instead of repository:...
, and arch:...
instead of architecture:...
.
"},{"location":"bot/#combining-filters","title":"Combining filters","text":"You can combine multiple filters in a single build
instruction. Separate filters with a space, order of filters does not matter.
For example:
bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen2\n
"},{"location":"bot/#multiple-build-instructions","title":"Multiple build instructions","text":"You can issue multiple build instructions in a single comment, even across multiple bot instances, repositories, and CPU targets. Specify one build instruction per line.
For example:
bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen3 inst:aws\nbot: build repo:eessi-hpc.org-2023.06-software arch:aarch64/generic inst:azure\n
Note
The bot applies the filters with partial matching, which you can use to combine multiple build instructions into a single one.
For example, if you only want to build for all aarch64
CPU targets, you can use arch:aarch64
as filter.
The same applies to the instance
and repository
filters.
"},{"location":"bot/#behind-the-scenes","title":"Behind-the-scenes","text":""},{"location":"bot/#processing-build-instructions","title":"Processing build instructions","text":"When the bot receives build instructions through a comment in a pull request, they are processed by the event handler component. It will:
1) Combine its active configuration (instance name, repositories, supported CPU targets) and the build instructions to prepare a list of jobs to submit;
2) Create a working directory for each job, including a Slurm job script that runs the bot/build.sh
script in the context of the changes proposed in the pull request to build the software, and runs bot/check-result.sh
script at the end to check whether the build was successful;
3) Submit each prepared job to a workernode that can build for the specified CPU target, and put a hold on it.
"},{"location":"bot/#managing-build-jobs","title":"Managing build jobs","text":"During the next iteration of the job manager, the submitted jobs are released and queued for execution.
The job manager also monitors the running jobs at regular intervals, and reports back in the pull request when a job has completed. It also reports the result (SUCCESS
or FAILURE
), based on the result of the bot/check-result.sh
script.
"},{"location":"bot/#artefacts","title":"Artefacts","text":"If all goes well, each job should produce a tarball as an artefact, which contains the software installations and the corresponding environment module files.
The message reported by the job manager provides an overview of the contents of the artefact, which was created by the bot/check-result.sh
script.
"},{"location":"bot/#testing","title":"Testing","text":"Warning
The test phase is not implemented yet in the bot.
We intend to use the EESSI test suite in different OS configurations to verify that the software that was built works as expected.
"},{"location":"bot/#deploying","title":"Deploying","text":"To deploy the artefacts that were obtained in the build phase, you should add the bot: deploy
label to the pull request.
This will trigger the event handler to upload the artefacts for ingestion into the EESSI repository.
"},{"location":"bot/#behind-the-scenes_1","title":"Behind-the-scenes","text":"The current setup for the software-layer repository, is as follows:
- The bot deploys the artefacts (tarballs) to an S3 bucket in AWS, along with a metadata file, using the
eessi-upload-to-staging
script; - A cron job that runs every couple of minutes on the CernVM-FS Stratum-0 server opens a pull request to the (private) EESSI/staging repository, to move the metadata file for each uploaded tarball from the
staged
to the approved
directory; - Once that pull request gets merged, the target is automatically ingested into the EESSI repository by a cron job on the Stratum-0 server, and the metadata file is moved from
approved
to ingested
in the EESSI/staging
repository;
"},{"location":"compatibility_layer/","title":"Compatibility layer","text":"The middle layer of the EESSI project is the compatibility layer, which ensures that our scientific software stack is compatible with different client operating systems (different Linux distributions, macOS and even Windows via WSL).
For this we rely on Gentoo Prefix, by installing a limited set of Gentoo Linux packages in a non-standard location (a \"prefix\"), using Gentoo's package manager Portage.
The compatible layer is maintained via our https://github.com/EESSI/compatibility-layer GitHub repository.
"},{"location":"contact/","title":"Contact info","text":"For more information:
- Visit our website
- Consult our documentation
- Ask for help at our support portal
- Join our Slack channel
- Reach out to one of the project partners
- Check out our GitHub repositories
- Follow us on Twitter
"},{"location":"filesystem_layer/","title":"Filesystem layer","text":""},{"location":"filesystem_layer/#cernvm-file-system-cernvm-fs","title":"CernVM File System (CernVM-FS)","text":"The bottom layer of the EESSI project is the filesystem layer, which is responsible for distributing the software stack.
For this we rely on CernVM-FS (or CVMFS for short), a network file system used to distribute the software to the clients in a fast, reliable and scalable way.
CVMFS was created over 10 years ago specifically for the purpose of globally distributing a large software stack. For the experiments at the Large Hadron Collider, it hosts several hundred million files and directories that are distributed to the order of hundred thousand client computers.
The hierarchical structure with multiple caching layers (Stratum-0, Stratum-1's located at partner sites and local caching proxies) ensures good performance with limited resources. Redundancy is provided by using multiple Stratum-1's at various sites. Since CVMFS is based on the HTTP protocol, the ubiquitous Squid caching proxy can be leveraged to reduce server loads and improve performance at large installations (such as HPC clusters). Clients can easily mount the file system (read-only) via a FUSE (Filesystem in Userspace) module.
For a (basic) introduction to CernVM-FS, see this presentation.
Detailed information about how we configure CVMFS is available at https://github.com/EESSI/filesystem-layer.
"},{"location":"filesystem_layer/#eessi-infrastructure","title":"EESSI infrastructure","text":"For both the pilot and production repositories, EESSI hosts a CernVM-FS Stratum 0 and a number of public Stratum 1 servers. Client systems using EESSI by default connect against the public EESSI CernVM-FS Stratum 1 servers. The status of the infrastructure for the pilot repository is displayed at http://status.eessi-infra.org, while for the production repository it is displayed at https://status.eessi.io.
"},{"location":"gpu/","title":"GPU support","text":"More information on the actions that must be performed to ensure that GPU software included in EESSI can use the GPU in your system is available below.
Please open a support issue if you need help or have questions regarding GPU support.
Make sure the ${EESSI_VERSION}
version placeholder is defined!
In this page, we use ${EESSI_VERSION}
as a placeholder for the version of the EESSI repository, for example:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}\n
Before inspecting paths, or executing any of the specified commands, you should define $EESSI_VERSION
first, for example with:
export EESSI_VERSION=2023.06\n
"},{"location":"gpu/#nvidia","title":"Support for using NVIDIA GPUs","text":"EESSI supports running CUDA-enabled software. All CUDA-enabled modules are marked with the (gpu)
feature, which is visible in the output produced by module avail
.
"},{"location":"gpu/#nvidia_drivers","title":"NVIDIA GPU drivers","text":"For CUDA-enabled software to run, it needs to be able to find the NVIDIA GPU drivers of the host system. The challenge here is that the NVIDIA GPU drivers are not always in a standard system location, and that we can not install the GPU drivers in EESSI (since they are too closely tied to the client OS and GPU hardware).
"},{"location":"gpu/#cuda_sdk","title":"Compiling CUDA software","text":"An additional requirement is necessary if you want to be able to compile CUDA-enabled software using a CUDA installation included in EESSI. This requires a full CUDA SDK, but the CUDA SDK End User License Agreement (EULA) does not allow for full redistribution. In EESSI, we are (currently) only allowed to redistribute the files needed to run CUDA software.
Full CUDA SDK only needed to compile CUDA software
Without a full CUDA SDK on the host system, you will still be able to run CUDA-enabled software from the EESSI stack, you just won't be able to compile additional CUDA software.
Below, we describe how to make sure that the EESSI software stack can find your NVIDIA GPU drivers and (optionally) full installations of the CUDA SDK.
"},{"location":"gpu/#host_injections","title":"host_injections
variant symlink","text":"In the EESSI repository, a special directory has been prepared where system administrators can install files that can be picked up by software installations included in EESSI. This gives the ability to administrators to influence the behaviour (and capabilities) of the EESSI software stack.
This special directory is located in /cvmfs/software.eessi.io/host_injections
, and it is a CernVM-FS Variant Symlink: a symbolic link for which the target can be controlled by the CernVM-FS client configuration (for more info, see 'Variant Symlinks' in the official CernVM-FS documentation).
Default target for host_injections
variant symlink
Unless otherwise configured in the CernVM-FS client configuration for the EESSI repository, the host_injections
symlink points to /opt/eessi
on the client system:
$ ls -l /cvmfs/software.eessi.io/host_injections\nlrwxrwxrwx 1 cvmfs cvmfs 10 Oct 3 13:51 /cvmfs/software.eessi.io/host_injections -> /opt/eessi\n
As an example, let's imagine that we want to use a architecture-specific location on a shared filesystem as the target for the symlink. This has the advantage that one can make changes under host_injections
that affect all nodes which share that CernVM-FS configuration. Configuring this in your CernVM-FS configuration would mean adding the following line in the client configuration file:
EESSI_HOST_INJECTIONS=/shared_fs/path\n
Don't forget to reload the CernVM-FS configuration
After making a change to a CernVM-FS configuration file, you also need to reload the configuration:
sudo cvmfs_config reload\n
All CUDA-enabled software in EESSI expects the CUDA drivers to be available in a specific subdirectory of this host_injections
directory. In addition, installations of the CUDA SDK included EESSI are stripped down to the files that we are allowed to redistribute; all other files are replaced by symbolic links that point to another specific subdirectory of host_injections
. For example:
$ ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\nlrwxrwxrwx 1 cvmfs cvmfs 109 Dec 21 14:49 /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc -> /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\n
If the corresponding full installation of the CUDA SDK is available there, the CUDA installation included in EESSI can be used to build CUDA software.
"},{"location":"gpu/#nvidia_eessi_native","title":"Using NVIDIA GPUs via a native EESSI installation","text":"Here, we describe the steps to enable GPU support when you have a native EESSI installation on your system.
Required permissions
To enable GPU support for EESSI on your system, you will typically need to have system administration rights, since you need write permissions to the target directory of the host_injections
symlink.
"},{"location":"gpu/#exposing-nvidia-gpu-drivers","title":"Exposing NVIDIA GPU drivers","text":"To install the symlinks to your GPU drivers in host_injections
, run the link_nvidia_host_libraries.sh
script that is included in EESSI:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh\n
This script uses ldconfig
on your host system to locate your GPU drivers, and creates symbolic links to them in the correct location under the host_injections
directory. It also stores the CUDA version supported by the driver that the symlinks were created for.
Re-run link_nvidia_host_libraries.sh
after NVIDIA GPU driver update
You should re-run this script every time you update the NVIDIA GPU drivers on the host system.
Note that it is safe to re-run the script even if no driver updates were done: the script should detect that the current version of the drivers has already been symlinked.
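If you want to check which driver version the host currently provides (for example before and after a driver update), one quick convenience check, not something the script requires, is to query nvidia-smi:
nvidia-smi --query-gpu=driver_version --format=csv,noheader\n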
"},{"location":"gpu/#installing-full-cuda-sdk-optional","title":"Installing full CUDA SDK (optional)","text":"To install a full CUDA SDK under host_injections
, use the install_cuda_host_injections.sh
script that is included in EESSI:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh\n
For example, to install CUDA 12.1.1 in the directory that the host_injections
variant symlink points to, using /tmp/$USER/EESSI
as directory to store temporary files:
/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --cuda-version 12.1.1 --temp-dir /tmp/$USER/EESSI --accept-cuda-eula\n
You should choose the CUDA version you wish to install according to what CUDA versions are included in EESSI; see the output of module avail CUDA/
after setting up your environment for using EESSI. You can run /cvmfs/software.eessi.io/scripts/install_cuda_host_injections.sh --help
to check all of the options.
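For example, to list the CUDA versions currently included in EESSI (assuming the 2023.06 version of EESSI, and that you have not yet initialized the EESSI environment in your current shell):
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule avail CUDA/\n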
Tip
This script uses EasyBuild to install the CUDA SDK. For this to work, two requirements need to be satisfied:
module load EasyBuild
should work (or the eb
command is already available in the environment); - The version of EasyBuild being used should provide the requested version of the CUDA easyconfig file (in the example case above, that's
CUDA-12.1.1.eb
).
You can rely on the EasyBuild installation that is included in EESSI for this.
Alternatively, you may load an EasyBuild module manually before running the install_cuda_host_injections.sh
script to make an eb
command available.
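A quick way to check both requirements is sketched below; the EasyBuild module version available to you may differ, and eb --search simply confirms that the requested easyconfig is known to EasyBuild:
module load EasyBuild\neb --version\neb --search CUDA-12.1.1.eb\n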
"},{"location":"gpu/#nvidia_eessi_container","title":"Using NVIDIA GPUs via EESSI in a container","text":"We focus here on the Apptainer/Singularity use case, and have only tested the --nv
option to enable access to GPUs from within the container.
If you are using the EESSI container to access the EESSI software, the procedure for enabling GPU support is slightly different and will be documented here eventually.
"},{"location":"gpu/#exposing-nvidia-gpu-drivers_1","title":"Exposing NVIDIA GPU drivers","text":"When running a container with apptainer
or singularity
it is not necessary to run the install_cuda_host_injections.sh
script since both these tools use $LD_LIBRARY_PATH
internally in order to make the host GPU drivers available in the container.
The only scenario where this would be required is if $LD_LIBRARY_PATH
is modified or undefined.
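If you want to double-check that the GPU is visible from inside such a container (a convenience check, assuming the NVIDIA utilities are passed through by the --nv option), you can simply run:
nvidia-smi\n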
"},{"location":"gpu/#gpu_cuda_testing","title":"Testing the GPU support","text":"The quickest way to test if software installations included in EESSI can access and use your GPU is to run the deviceQuery
executable that is part of the CUDA-Samples
module:
module load CUDA-Samples\ndeviceQuery\n
If both are successful, you should see information about your GPU printed to your terminal."},{"location":"meetings/","title":"Meetings","text":""},{"location":"meetings/#monthly-meetings-online","title":"Monthly meetings (online)","text":"Online EESSI update meeting, every 1st Thursday of the month at 14:00 CE(S)T.
More info can be found on the EESSI wiki.
"},{"location":"meetings/#physical-meetings","title":"Physical meetings","text":" - EESSI Community Meeting in Amsterdam (NL), 14-16 Sept 2022
"},{"location":"meetings/#physical-meetings-archive","title":"Physical meetings (archive)","text":""},{"location":"meetings/#2020","title":"2020","text":" - Meeting in Groningen (NL), 16 Jan 2020
- Meeting in Delft (NL), 5 Mar 2020
"},{"location":"meetings/#2019","title":"2019","text":" - Meeting in Cambridge (UK), 20-21 May 2019
"},{"location":"overview/","title":"Overview of the EESSI project","text":""},{"location":"overview/#scope-goals","title":"Scope & Goals","text":"Through the EESSI project, we want to set up a shared stack of scientific software installations, and by doing so avoid a lot of duplicate work across HPC sites.
For end users, we want to provide a uniform user experience with respect to available scientific software, regardless of which system they use.
Our software stack should work on laptops, personal workstations, HPC clusters and in the cloud, which means we will need to support different CPUs, networks, GPUs, and so on. We hope to make this work for any Linux distribution and maybe even macOS and Windows via WSL, and a wide variety of CPU architectures (Intel, AMD, ARM, POWER, RISC-V).
Of course we want to focus on the performance of the software, but also on automating the workflow for maintaining the software stack, thoroughly testing the installations, and collaborating efficiently.
"},{"location":"overview/#inspiration","title":"Inspiration","text":"The EESSI concept is heavily inspired by Compute Canada software stack, which is a shared software stack used on all 5 major national systems in Canada and a bunch of smaller ones.
The design of the Compute Canada software stack is discussed in detail in the PEARC'19 paper \"Providing a Unified Software Environment for Canada\u2019s National Advanced Computing Centers\".
It has also been presented at the 5th EasyBuild User Meetings (slides, recorded talk), and is well documented.
"},{"location":"overview/#layered-structure","title":"Layered structure","text":"The EESSI project consists of 3 layers.
The bottom layer is the filesystem layer, which is responsible for distributing the software stack across clients.
The middle layer is a compatibility layer, which ensures that the software stack is compatible with multiple different client operating systems.
The top layer is the software layer, which contains the actual scientific software applications and their dependencies.
The host OS still provides a couple of things, like drivers for network and GPU, support for shared filesystems like GPFS and Lustre, a resource manager like Slurm, and so on.
"},{"location":"overview/#opportunities","title":"Opportunities","text":"We hope to collaborate with interested parties across the HPC community, including HPC centres, vendors, consultancy companies and scientific software developers.
Through our software stack, HPC users can seamlessly hop between sites, since the same software is available everywhere.
We can leverage each others work with respect to providing tested and properly optimized scientific software installations more efficiently, and provide a platform for easy benchmarking of new systems.
By working together with the developers of scientific software we can provide vetted installations for the broad HPC community.
"},{"location":"overview/#challenges","title":"Challenges","text":"There are many challenges in an ambitious project like this, including (but probably not limited to):
- Finding time and manpower to get the software stack set up properly;
- Leveraging system sources like network interconnect (MPI & co), accelerators (GPUs), ...;
- Supporting CPU architectures other than x86_64, including ARM, POWER, RISC-V, ...
- Dealing with licensed software, like Intel tools, MATLAB, ANSYS, ...;
- Integration with resource managers (Slurm) and vendor provided software (Cray PE);
- Convincing HPC site admins to adopt EESSI;
"},{"location":"overview/#current-status","title":"Current status","text":"(June 2020)
We are actively working on the EESSI repository, and are organizing monthly meetings to discuss progress and next steps forward.
Keep an eye on our GitHub repositories at https://github.com/EESSI and our Twitter feed.
"},{"location":"partners/","title":"Project partners","text":""},{"location":"partners/#delft-university-of-technology-the-netherlands","title":"Delft University of Technology (The Netherlands)","text":" - Robbert Eggermont
- Koen Mulderij
"},{"location":"partners/#dell-technologies-europe","title":"Dell Technologies (Europe)","text":" - Walther Blom, High Education & Research
- Jaco van Dijk, Higher Education
"},{"location":"partners/#eindhoven-university-of-technology","title":"Eindhoven University of Technology","text":" - Alain van Hoof, HPC-Lab
"},{"location":"partners/#ghent-university-belgium","title":"Ghent University (Belgium)","text":" - Kenneth Hoste, HPC-UGent
"},{"location":"partners/#hpcnow-spain","title":"HPCNow! (Spain)","text":" - Oriol Mula Valls
"},{"location":"partners/#julich-supercomputing-centre-germany","title":"J\u00fclich Supercomputing Centre (Germany)","text":" - Alan O'Cais
"},{"location":"partners/#university-of-cambridge-united-kingdom","title":"University of Cambridge (United Kingdom)","text":" - Mark Sharpley, Research Computing Services Division
"},{"location":"partners/#university-of-groningen-the-netherlands","title":"University of Groningen (The Netherlands)","text":" - Bob Dr\u00f6ge, Center for Information Technology
- Henk-Jan Zilverberg, Center for Information Technology
"},{"location":"partners/#university-of-twente-the-netherlands","title":"University of Twente (The Netherlands)","text":" - Geert Jan Laanstra, Electrical Engineering, Mathematics and Computer Science (EEMCS)
"},{"location":"partners/#university-of-oslo-norway","title":"University of Oslo (Norway)","text":" - Terje Kvernes
"},{"location":"partners/#university-of-bergen-norway","title":"University of Bergen (Norway)","text":" - Thomas R\u00f6blitz
"},{"location":"partners/#vrije-universiteit-amsterdam-the-netherlands","title":"Vrije Universiteit Amsterdam (The Netherlands)","text":" - Peter Stol
"},{"location":"partners/#surf-the-netherlands","title":"SURF (The Netherlands)","text":" - Caspar van Leeuwen
- Marco Verdicchio
- Bas van der Vlies
"},{"location":"software_layer/","title":"Software layer","text":"The top layer of the EESSI project is the software layer, which provides the actual scientific software installations.
To install the software we include in our stack, we use EasyBuild, a framework for installing scientific software on HPC systems. These installations are optimized for a particular system architecture (specific CPU and GPU generation).
To access these software installation we provide environment module files and use Lmod, a modern environment modules tool which has been widely adopted in the HPC community in recent years.
We leverage the archspec Python library to automatically select the best suited part of the software stack for a particular host, based on its system architecture.
The software layer is maintained through our https://github.com/EESSI/software-layer GitHub repository.
"},{"location":"software_testing/","title":"Software testing","text":"This page has been replaced with test-suite, update your bookmarks!
"},{"location":"support/","title":"Getting support for EESSI","text":"Thanks to the MultiXscale EuroHPC project we are able to provide support to the users of EESSI.
The EESSI support portal is hosted in GitLab: https://gitlab.com/eessi/support.
"},{"location":"support/#open-issue","title":"How to report a problem or ask a question","text":"We recommend you to use a GitLab account if you want to get help from the EESSI support team.
If you have a GitLab account you can submit your problems or questions on EESSI via the issue tracker of the EESSI support portal at https://gitlab.com/eessi/support/-/issues. Please use one of the provided templates (report a problem, software request, question, ...) when creating an issue.
You can also contact us via our e-mail address support (@) eessi.io
, which will automatically create a (private) issue in the EESSI support portal. When you send us an email, please provide us with as much information as possible on your question or problem. You can find an overview of the information that we would like to receive in the README of the EESSI support portal.
"},{"location":"support/#level-of-support","title":"Level of Support","text":"We provide support for EESSI according to a \"reasonable effort\" standard. That means we will go into reasonable effort to help you, but we may not have the time to explore every potential cause, and it may not lead to a (quick) solution. You can compare this to the level of support you typically get from other active open source projects.
Note that the more complete your reported issue is (e.g. description of the error, what you ran, the software environment in which you ran, minimal reproducer, etc.) the bigger the chance is that we can help you with \"reasonable effort\".
"},{"location":"support/#what-do-we-provide-support-for","title":"What do we provide support for","text":""},{"location":"support/#accessing-and-using-the-eessi-software-stack","title":"Accessing and using the EESSI software stack","text":"If you have trouble connecting to the software stack, such as trouble related to installing or configuring CernVM-FS to access the EESSI filesystem layer, or running the software installations included in the EESSI compatibility layer or software layer, please contact us.
Note that we can only help with problems related to the software installations (getting the software to run, to perform as expected, etc.). We do not provide support for using specific features of the provided software, nor can we fix (known or unknown) bugs in the software included in EESSI. We can only help with diagnosing and fixing problems that are caused by how the software was built and installed in EESSI.
"},{"location":"support/#software-requests","title":"Software requests","text":"We are open to software requests for software that is not included in EESSI yet.
The quickest way to add additional software to EESSI is by contributing it yourself as a community contribution, please see the documentation on adding software.
Alternatively, you can send in a request to our support team. Please try to provide as much information on the software as possible: preferably use the issue template (which requires you to log in to GitLab), or make sure to cover the items listed here.
Be aware that we can only provide software that has an appropriate open source license.
"},{"location":"support/#eessi-test-suite","title":"EESSI test suite","text":"If you are using the EESSI test suite, you can get help via the EESSI support portal.
"},{"location":"support/#build-and-deploy-bot","title":"Build-and-deploy bot","text":"If you are using the EESSI build-and-deploy bot, you can get help via the EESSI support portal.
"},{"location":"support/#what-do-we-not-provide-support-for","title":"What do we not provide support for","text":"Do not contact the EESSI support team to get help with using software that is included in EESSI, unless you think the problems you are seeing are related to how the software was built and installed.
Please consult the documentation of the software you are using, or contact the developers of the software directly, if you have questions regarding using the software, or if you think you have found a bug.
Funded by the European Union. This work has received funding from the European High Performance Computing Joint Undertaking (JU) and countries participating in the project under grant agreement No 101093169.
"},{"location":"talks/","title":"Talks related to EESSI","text":""},{"location":"talks/#2023","title":"2023","text":" - Streaming Optimised Scientific Software: an Introduction to EESSI (online tutorial, 5 Dec 2023)
- Best Practices for CernVM-FS in HPC (online tutorial, 4 Dec 2023)
- Streaming optimized scientific software installations on any Linux distro with EESSI (PackagingCon 2023, 27 Oct 2023)
- Making scientific software EESSI - and fast (8-min AWS HPC Tech Short, 15 June 2023)
"},{"location":"adding_software/building_software/","title":"Building software","text":"(for maintainers)
"},{"location":"adding_software/building_software/#bot_build","title":"Instructing the bot to build","text":"Once the pull request is open, you can instruct the bot to build the software by posting a comment.
For more information, see the building section in the bot documentation.
Warning
Permission to trigger building of software must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/building_software/#guidelines","title":"Guidelines","text":" -
It may be wise to let the bot perform a test build first, rather than letting it build for a wide range of CPU targets.
-
If one of the builds failed, you can let the bot retry that specific build.
-
Make sure that the software has been built correctly for all CPU targets before you deploy!
"},{"location":"adding_software/building_software/#checking-the-builds","title":"Checking the builds","text":"If all goes well, you should see SUCCESS
for each build, along with button to get more information about the checks that were performed, and metadata information on the resulting artefact .
Note
Make sure the result is what you expect it to be for all builds before you deploy!
"},{"location":"adding_software/building_software/#failing-builds","title":"Failing builds","text":"Warning
The bot will currently not give you any information on how or why a build is failing.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/building_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.
For more information, see the deploying section in the bot documentation.
Warning
Permission to trigger deployment of software installations must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/building_software/#merging-the-pull-request","title":"Merging the pull request","text":"You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.
Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass , and then the pull request can be merged.
Note
This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml
) that checks for missing installations, in the correct branch (for example 2023.06
) of the software-layer.
If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!
Warning
You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/building_software/#getting-help","title":"Getting help","text":"If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer
channel of the EESSI Slack.
"},{"location":"adding_software/contribution_policy/","title":"Contribution policy","text":"(version v0.1.0 - updated 9 Nov 2023)
Note
This policy is subject to change, please check back regularly.
"},{"location":"adding_software/contribution_policy/#purpose","title":"Purpose","text":"The purpose of this contribution policy is to provide guidelines for adding software to EESSI.
It informs about what requirements must be met in order for software to be eligible for inclusion in the EESSI software layer.
"},{"location":"adding_software/contribution_policy/#requirements","title":"Requirements","text":"The following requirements must be taken into account when adding software to EESSI.
Note that additional restrictions may apply in specific cases that are currently not covered explicitly by this policy.
"},{"location":"adding_software/contribution_policy/#freely_redistributable_software","title":"i) Freely redistributable software","text":"Only freely redistributable software can be added to the EESSI repository, and we strongly prefer including only open source software in EESSI.
Make sure that you are aware of the relevant software licenses, and that redistribution of the software you want to add to EESSI is allowed.
For more information about a specific software license, see the SPDX license list.
Note
We intend to automatically verify that this requirement is met, by requiring that the SPDX license identifier is provided for all software included in EESSI.
"},{"location":"adding_software/contribution_policy/#built_by_bot","title":"ii) Built by the bot","text":"All software included in the EESSI repository must be built autonomously by our bot .
For more information, see our semi-automatic software installation procedure.
"},{"location":"adding_software/contribution_policy/#easybuild","title":"iii) Built and installed with EasyBuild","text":"We currently require that all software installations in EESSI are built and installed using EasyBuild.
We strongly prefer that the latest release of EasyBuild that is available at the time is used to add software to EESSI.
The use of --from-pr
and --include-easyblocks-from-pr
to pull in changes to EasyBuild that are required to make the installation work correctly in EESSI is allowed, but only if that is strictly required (that is, if those changes are not included yet in the latest EasyBuild release).
"},{"location":"adding_software/contribution_policy/#supported_toolchain","title":"iv) Supported compiler toolchain","text":"A compiler toolchain that is still supported by the latest EasyBuild release must be used for building the software.
For more information on supported toolchains, see the EasyBuild toolchain support policy.
"},{"location":"adding_software/contribution_policy/#recent_toolchains","title":"v) Recent toolchain versions","text":"We strongly prefer adding software to EESSI that was built with a recent compiler toolchain.
When adding software to a particular version of EESSI, you should use a toolchain version that is already installed.
If you would like to see an additional toolchain version being added to a particular version of EESSI, please open a support request for this, and motivate your request.
"},{"location":"adding_software/contribution_policy/#recent_software_versions","title":"vi) Recent software versions","text":"We strongly prefer adding sufficiently recent software versions to EESSI.
If you would like to add older software versions, please clearly motivate the need for this in your contribution.
"},{"location":"adding_software/contribution_policy/#cpu_targets","title":"vii) CPU targets","text":"Software that is added to EESSI should work on all supported CPU targets.
Exceptions to this requirement are allowed if technical problems that can not be resolved with reasonable effort prevent the installation of the software for specific CPU targets.
"},{"location":"adding_software/contribution_policy/#testing","title":"viii) Testing","text":"We should be able to test the software installations via the EESSI test suite, in particular for software applications and user-facing tools.
Ideally one or more tests are available that verify that the software is functionally correct, and that it (still) performs well.
Tests that are run during the software installation procedure as performed by EasyBuild must pass. Exceptions can be made if only a small subset of tests fail for specific CPU targets, as long as these exceptions are tracked and an effort is made to assess the impact of those failing tests.
It should be possible to run a minimal smoke test for the software included in EESSI, for example using EasyBuild's --sanity-check-only
feature.
Note
The EESSI test suite is still in active development, and currently only has a minimal set of tests available.
When the test suite is more mature, this requirement will be enforced more strictly.
"},{"location":"adding_software/contribution_policy/#changelog","title":"Changelog","text":""},{"location":"adding_software/contribution_policy/#v010-9-nov-2023","title":"v0.1.0 (9 Nov 2023)","text":" - initial contribution policy
"},{"location":"adding_software/debugging_failed_builds/","title":"Debugging failed builds","text":"(for contributors + maintainers)
Unfortunately, software does not always build successfully. Since EESSI targets novel CPU architectures as well, build failures on such platforms are quite common, as the software and/or the software build systems have not always been adjusted to support these architectures yet.
In EESSI, all software packages are built by a bot. This is great for builds that complete successfully as we can build many software packages for a wide range of hardware with little human intervention. However, it does mean that you, as contributor, can not easily access the build directory and build logs to figure out build issues.
This page describes how you can interactively reproduce failed builds, so that you can more easily debug the issue.
Throughout this page, we will use this PR as an example. It intends to add LAMMPS to EESSI. Among other issues, it failed on a building Plumed.
"},{"location":"adding_software/debugging_failed_builds/#prerequisites","title":"Prerequisites","text":"You will need to have:
- Access to a machine with the hardware for which the build that you want to debug failed.
- On that machine, meet the requirements for running the EESSI container, as described on this page.
"},{"location":"adding_software/debugging_failed_builds/#preparing-the-environment","title":"Preparing the environment","text":"A number of steps are needed to create the same environment in which the bot builds.
- Fetching the feature branch from which you want to replicate a build.
- Starting a shell in the EESSI container.
- Start the Gentoo Prefix environment.
- Start the EESSI software environment.
- Configure EasyBuild.
"},{"location":"adding_software/debugging_failed_builds/#fetching-the-feature-branch","title":"Fetching the feature branch","text":"Looking at the example PR, we see the PR is created from this fork. First, we clone the fork, then checkout the feature branch (LAMMPS_23Jun2022
)
git clone https://github.com/laraPPr/software-layer/\ncd software-layer\ngit checkout LAMMPS_23Jun2022\n
Alternatively, if you already have a clone of the software-layer
you can add it as a new remote cd software-layer\ngit remote add laraPPr https://github.com/laraPPr/software-layer/\ngit fetch laraPPr\ngit checkout LAMMPS_23Jun2022\n
"},{"location":"adding_software/debugging_failed_builds/#starting-a-shell-in-the-eessi-container","title":"Starting a shell in the EESSI container","text":"Simply run the EESSI container (eessi_container.sh
), which should be in the root of the software-layer
repository
./eessi_container.sh --access rw\n
If you want to install NVIDIA GPU software, make sure to also add the --nvidia all
argument, to insure that your GPU drivers get mounted inside the container:
./eessi_container.sh --access rw --nvidia all\n
Note
You may have to press enter to clearly see the prompt as some messages beginning with CernVM-FS:
have been printed after the first prompt Apptainer>
was shown.
"},{"location":"adding_software/debugging_failed_builds/#more-efficient-approach-for-multiplecontinued-debugging-sessions","title":"More efficient approach for multiple/continued debugging sessions","text":"While the above works perfectly well, you might not be able to complete your debugging session in one go. With the above approach, several steps will just be repeated every time you start a debugging session:
- Downloading the container
- Installing
CUDA
in your host injections directory (only if you use the EESSI-install-software.sh
script, see below) - Installing all dependencies (before you get to the package that actually fails to build)
To avoid this, we create two directories. One holds the container & host_injections
, which are (typically) common between multiple PRs and thus you don't have to redownload the container / reinstall the host_injections
if you start working on another PR. The other will hold the PR-specific data: a tarball storing the software you'll build in your interactive debugging session. The paths we pick here are just example, you can pick any persistent, writeable location for this:
eessi_common_dir=${HOME}/eessi-manual-builds\neessi_pr_dir=${HOME}/pr360\n
Now, we start the container
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir}\n
Here, the SINGULARITY_CACHEDIR
makes sure that if the container was already downloaded, and is present in the cache, it is not redownloaded. The host injections will just be picked up from ${eessi_common_dir}/host_injections
(if those were already installed before). And finally, the --save
makes sure that everything that you build in the container gets stored in a tarball as soon as you exit the container.
Note that the first exit
command will first make you exit the Gentoo prefix environment. Only the second will take you out of the container, and print where the tarball will be stored:
[EESSI 2023.06] $ exit\nlogout\nLeaving Gentoo Prefix with exit status 1\nApptainer> exit\nexit\nSaved contents of tmp directory '/tmp/eessi-debug.VgLf1v9gf0' to tarball '${HOME}/pr360/EESSI-1698056784.tgz' (to resume session add '--resume ${HOME}/pr360/EESSI-1698056784.tgz')\n
Note that the tarballs can be quite sizeable, so make sure to pick a filesystem where you have a large enough quotum.
Next time you want to continue investigating this issue, you can start the container with --resume DIR/TGZ
and continue where you left off, having all dependencies already built and available.
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir}/EESSI-1698056784.tgz\n
For a detailed description on using the script eessi_container.sh
, see here.
Note
Reusing a previously downloaded container, or existing CUDA installation from a host_injections
is not be a good approach if those could be the cause of your issues. If you are unsure if this is the case, simply follow the regular approach to starting the EESSI container.
Note
It is recommended to clean the container cache and host_injections
directories every now and again, to make sure you pick up the latest changes for those two components.
"},{"location":"adding_software/debugging_failed_builds/#start-the-gentoo-prefix-environment","title":"Start the Gentoo Prefix environment","text":"The next step is to start the Gentoo Prefix environment.
Before we start, check the current values of ${EESSI_CVMFS_REPO}
and ${EESSI_VERSION}
so that you can reset them later:
echo ${EESSI_CVMFS_REPO}\necho ${EESSI_VERSION}\n
Then, we set EESSI_OS_TYPE
and EESSI_CPU_FAMILY
and run the startprefix
command to start the Gentoo Prefix environment:
export EESSI_OS_TYPE=linux # We only support Linux for now\nexport EESSI_CPU_FAMILY=$(uname -m)\n${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/startprefix\n
Now, reset the ${EESSI_CVMFS_REPO}
and ${EESSI_VERSION}
in your prefix environment with the initial values (printed in the echo statements above)
export EESSI_CVMFS_REPO=...\nexport EESSI_VERSION=...\n
Note
By activating the Gentoo Prefix environment, the system tools (e.g. ls
) you would normally use are now provided by Gentoo Prefix, instead of the container OS. E.g. running which ls
after starting the prefix environment as above will return /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin/ls
. This makes the builds completely independent from the container OS.
"},{"location":"adding_software/debugging_failed_builds/#building-for-the-generic-optimization-target","title":"Building for the generic
optimization target","text":"If you want to replicate a build with generic
optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic
) you will need to set the following environment variable:
export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-with-the-eessi-install-softwaresh-script","title":"Building software with the EESSI-install-software.sh
script","text":"The Automatic build and deploy bot installs software by executing the EESSI-install-software.sh
script. The advantage is that running this script is the closest you can get to replicating the bot's behaviour - and thus the failure. The downside is that if a PR adds a lot of software, it may take quite a long time to run - even if you might already know what the problematic software package is. In that case, you might be better off following the steps under Building software from an easystack file or Building an individual package.
Note that you could also combine approaches: first build everything using the EESSI-install-software.sh
script, until you reproduce the failure. Then, start making modifications (e.g. changes to the EasyConfig, patches, etc) and trying to rebuild that package individually to test your changes.
To build software using the EESSI-install-software.sh
script, you'll first need to get the diff file for the PR. This is used by the EESSI-install-software.sh
script to see what is changed in this PR - and thus what needs to be build for this PR. To download the diff for PR 360, we would e.g. do
wget https://github.com/EESSI/software-layer/pull/360.diff\n
Now, we run the EESSI-install-software.sh
script:
./EESSI-install-software.sh\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-from-an-easystack-file","title":"Building software from an easystack file","text":""},{"location":"adding_software/debugging_failed_builds/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"To activate the software environment, run
source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n
Note
If you get an error bash: /versions//init/bash: No such file or directory
, you forgot to reset the ${EESSI_CVFMS_REPO}
and ${EESSI_VERSION}
environment variables at the end of the previous step.
Note
If you want to build with generic optimization, you should run export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic
before sourcing.
For more info on starting the EESSI software environment, see here
"},{"location":"adding_software/debugging_failed_builds/#configure-easybuild","title":"Configure EasyBuild","text":"It is important that we configure EasyBuild in the same way as the bot uses it, with one small exceptions: our working directory will be different. Typically, that doesn't matter, but it's good to be aware of this one difference, in case you fail to replicate the build failure.
In this example, we create a unique temporary directory inside /tmp
to serve both as our workdir. Finally, we will source the configure_easybuild
script, which will configure EasyBuild by setting environment variables.
export WORKDIR=$(mktemp --directory --tmpdir=/tmp -t eessi-debug.XXXXXXXXXX)\nsource configure_easybuild\n
Among other things, the configure_easybuild
script sets the install path for EasyBuild to point to the correct installation directory in (to ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_SOFTWARE_SUBDIR}
). This is the exact same path the bot
uses to build, and uses a writeable overlay filesystem in the container to write to a path in /cvmfs
(which normally is read-only). This is identical to what the bot
does. Note
If you started the container using --resume, you may want WORKDIR to point to the workdir you created previously (instead of creating a new, temporary directory with mktemp
).
Note
If you want to replicate a build with generic
optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic
) you will need to set export EASYBUILD_OPTARCH=GENERIC
after sourcing configure_easybuild
.
Next, we need to determine the correct version of EasyBuild to load. Since the example PR changes the file eessi-2023.06-eb-4.8.1-2021b.yml
, this tells us the bot was using version 4.8.1
of EasyBuild to build this. Thus, we load that version of the EasyBuild module and check if everything was configured correctly:
module load EasyBuild/4.8.1\neb --show-config\n
You should get something similar to #\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath (E) = /tmp/easybuild/easybuild/build\ncontainerpath (E) = /tmp/easybuild/easybuild/containers\ndebug (E) = True\nexperimental (E) = True\nfilter-deps (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib, Yasm\nfilter-env-vars (E) = LD_LIBRARY_PATH\nhooks (E) = ${HOME}/software-layer/eb_hooks.py\nignore-osdeps (E) = True\ninstallpath (E) = /tmp/easybuild/software/linux/aarch64/neoverse_n1\nmodule-extensions (E) = True\npackagepath (E) = /tmp/easybuild/easybuild/packages\nprefix (E) = /tmp/easybuild/easybuild\nread-only-installdir (E) = True\nrepositorypath (E) = /tmp/easybuild/easybuild/ebfiles_repo\nrobot-paths (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/aarch64/neoverse_n1/software/EasyBuild/4.8.1/easybuild/easyconfigs\nrpath (E) = True\nsourcepath (E) = /tmp/easybuild/easybuild/sources:\nsysroot (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/aarch64\ntrace (E) = True\nzip-logs (E) = bzip2\n
"},{"location":"adding_software/debugging_failed_builds/#building-everything-in-the-easystack-file","title":"Building everything in the easystack file","text":"In our example PR, the easystack file that was changed was eessi-2023.06-eb-4.8.1-2021b.yml
. To build this, we run (in the directory that contains the checkout of this feature branch):
eb --easystack eessi-2023.06-eb-4.8.1-2021b.yml --robot\n
After some time, this build fails while trying to build Plumed
, and we can access the build log to look for clues on why it failed."},{"location":"adding_software/debugging_failed_builds/#building-an-individual-package","title":"Building an individual package","text":"First, prepare the environment by following the [Starting the EESSI software environment][#starting-the-eessi-software-environment] and Configure EasyBuild above.
In our example PR, the individual package that was added to eessi-2023.06-eb-4.8.1-2021b.yml
was LAMMPS-23Jun2022-foss-2021b-kokkos.eb
. To mimic the build behaviour, we'll also have to (re)use any options that are listed in the easystack file for LAMMPS-23Jun2022-foss-2021b-kokkos.eb
, in this case the option --from-pr 19000
. Thus, to build, we run:
eb LAMMPS-23Jun2022-foss-2021b-kokkos.eb --robot --from-pr 19000\n
After some time, this build fails while trying to build Plumed
, and we can access the build log to look for clues on why it failed. Note
While this might be faster than the easystack-based approach, this is not how the bot builds. So while it may reproduce the failure the bot encounters, it may also not reproduce the bug at all (no failure), or run into different bugs. If you want to be sure, use the easystack-based approach.
"},{"location":"adding_software/debugging_failed_builds/#rebuilding-software","title":"Rebuilding software","text":"Rebuilding software requires an additional step at the beginning: the software first needs to be removed. We assume you've already checked out the feature branch. Then, you need to start the container with the additional --fakeroot
argument, otherwise you will not be able to remove files from the /cvmfs
prefix. Make sure to also include the --save
argument, as we will need the tarball later on. E.g.
SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --fakeroot\n
Then, initialize the EESSI environment source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n
and get the diff file for the corresponding PR, e.g. for PR 123: wget https://github.com/EESSI/software-layer/pull/123.diff\n
Finally, run the EESSI-remove-software.sh
script ./EESSI-remove-software.sh`\n
This should remove any software specified in a rebuild easystack that got added in your current feature branch.
Now, exit the container, paying attention to the instructions that are printed to resume later, e.g.:
Saved contents of tmp directory '/tmp/eessi.WZxeFUemH2' to tarball '/home/myuser/pr507/EESSI-1711538681.tgz' (to resume session add '--resume /home/myuser/pr507/EESSI-1711538681.tgz')\n
Now, continue with the original instructions to start the container (i.e. either here or with this alternate approach) and make sure to add the --resume
flag. This way, you are resuming from the tarball (i.e. with the software removed that has to be rebuilt), but in a new container in which you have regular (i.e. no root) permissions.
"},{"location":"adding_software/debugging_failed_builds/#running-the-test-step","title":"Running the test step","text":"If you are still in the prefix layer (i.e. after previously building something), exit it first:
$ exit\nlogout\nLeaving Gentoo Prefix with exit status 0\n
Then, source the EESSI init script (again): Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} Apptainer>\n
Note
If you are in a SLURM environment, make sure to run for i in $(env | grep SLURM); do unset \"${i%=*}\"; done
to unset any SLURM environment variables. Failing to do so will cause mpirun
to pick up on these and e.g. infer how many slots are available. If you run into errors of the form \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\", you probably forgot this step.
Then, execute the run_tests.sh
script. We are assuming you are still in the root of the software-layer
repository that you cloned earlier:
./run_tests.sh\n
if all goes well, you should see (part of) the EESSI test suite being run by ReFrame, finishing with something like [ PASSED ] Ran X/Y test case(s) from Z check(s) (0 failure(s), 0 skipped, 0 aborted)\n
Note
If you are running on a system with hyperthreading enabled, you may still run into the \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\" error from mpirun
, because hardware threads are not considered to be slots by default by OpenMPIs mpirun
. In this case, run with OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1 ./run_tests.sh
(for OpenMPI 4.X) or PRTE_MCA_rmaps_default_mapping_policy=:hwtcpus ./run_tests.sh
(for OpenMPI 5.X).
"},{"location":"adding_software/debugging_failed_builds/#known-causes-of-issues-in-eessi","title":"Known causes of issues in EESSI","text":""},{"location":"adding_software/debugging_failed_builds/#the-custom-system-prefix-of-the-compatibility-layer","title":"The custom system prefix of the compatibility layer","text":"Some installations might expect the system root (sysroot, for short) to be in /
. However, in case of EESSI, we are building against the OS in the compatibility layer. Thus, our sysroot is something like ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}
. This can cause issues if installation procedures assume the sysroot is in /
.
One example of a sysroot issue was in installing wget
. The EasyConfig for wget
defined
# make sure pkg-config picks up system packages (OpenSSL & co)\npreconfigopts = \"export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
This will not work in EESSI, since the OpenSSL should be picked up from the compatibility layer. This was fixed by changing the EasyConfig to read preconfigopts = \"export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl\n
The %(sysroot)s
is a template value which EasyBuild will resolve to the value that has been configured in EasyBuild for sysroot
(it is one of the fields printed by eb --show-config
if a non-standard sysroot is configured). If you encounter issues where the installation can not find something that is normally provided by the OS (i.e. not one of the dependencies in your module environment), you may need to resort to a similar approach.
"},{"location":"adding_software/debugging_failed_builds/#the-writeable-overlay","title":"The writeable overlay","text":"The writeable overlay in the container is known to be a bit slow sometimes. Thus, we have seen tests failing because they exceed some timeout (e.g. this issue).
To investigate if the writeable overlay is somehow the issue, you can make sure the installation gets done somewhere else, e.g. in the temporary directory in /tmp
that you created as workdir. To do this, set
export EASYBUILD_INSTALLPATH=${WORKDIR}\n
after the step in which you have sourced the configure_easybuild
script. Note that in order to find (with module av
) any modules that get installed here, you will need to add this path to the MODULEPATH
:
module use ${EASYBUILD_INSTALLPATH}/modules/all\n
Then, retry building the software (as described above). If the build now succeeds, you know that indeed the writeable overlay caused the issue. We have to build in this writeable overlay when we do real deployments. Thus, if you hit such a timeout, try to see if you can (temporarily) modify the timeout value in the test so that it passes.
"},{"location":"adding_software/deploying_software/","title":"Deploying software","text":"(for maintainers)
"},{"location":"adding_software/deploying_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.
For more information, see the deploying section in the bot documentation.
Warning
Permission to trigger deployment of software installations must be granted to your GitHub account first!
See bot permissions for more information.
"},{"location":"adding_software/deploying_software/#merging-the-pull-request","title":"Merging the pull request","text":"You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.
Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass , and then the pull request can be merged.
Note
This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml
) that checks for missing installations, in the correct branch (for example 2023.06
) of the software-layer.
If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!
Warning
You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.
Ask for help in the #software-layer
channel of the EESSI Slack if needed!
"},{"location":"adding_software/deploying_software/#getting-help","title":"Getting help","text":"If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer
channel of the EESSI Slack.
"},{"location":"adding_software/opening_pr/","title":"Opening a pull request","text":"(for contributors)
To add software to EESSI, you should go through the semi-automatic software installation procedure by:
- 1) Making a pull request to the software-layer repository to (add or) update an easystack file that is used by EasyBuild to install software;
- 2) Instructing the bot to build the software on all supported CPU microarchitectures;
- 3) Instructing the bot to deploy the built software for ingestion into the EESSI repository;
- 4) Merging the pull request once CI indicates that the software has been ingested.
Warning
Make sure you are also aware of our contribution policy when adding software to EESSI.
"},{"location":"adding_software/opening_pr/#preparation","title":"Preparation","text":"Before you can make a pull request to the software-layer, you should fork the repository in your GitHub account.
For the remainder of these instructions, we assume that your GitHub account is @koala
.
Note
Don't forget to replace koala
with the name of your GitHub account in the commands below!
1) Clone the EESSI/software-layer repository:
mkdir EESSI\ncd EESSI\ngit clone https://github.com/EESSI/software-layer\ncd software-layer\n
2) Add your fork as a remote
git remote add koala git@github.com:koala/software-layer.git\n
3) Check out the branch that corresponds to the version of EESSI repository you want to add software to, for example 2023.06-software.eessi.io
:
git checkout 2023.06-software.eessi.io\n
Note
The commands above only need to be run once, to prepare your setup for making pull requests.
"},{"location":"adding_software/opening_pr/#software_layer_pull_request","title":"Creating a pull request","text":"1) Make sure that your 2023.06-software.eessi.io
branch in the checkout of the EESSI/software-layer
repository is up-to-date
cd EESSI/software-layer\ngit checkout 2023.06-software.eessi.io \ngit pull origin 2023.06-software.eessi.io \n
2) Create a new branch (use a sensible name, not example_branch
as below), and check it out
git checkout -b example_branch\n
3) Determine the correct easystack file to change, and add one or more lines to it that specify which easyconfigs should be installed
echo ' - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\n
Note that the naming scheme is standardized and should be eessi-<eessi_version>-eb-<eb_version>-<toolchain_version>.yml
. See the official EasyBuild documentation on easystack files for more information on the syntax. 4) Stage and commit the changes into your your branch with a sensible message
git add easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\ngit commit -m \"{2023.06}[GCC/12.3.0] example 1.2.3\"\n
5) Push your branch to your fork of the software-layer repository
git push koala example_branch\n
6) Go to the GitHub web interface to open your pull request, or use the helpful link that should show up in the output of the git push
command.
Make sure you target the correct branch: the one that corresponds to the version of EESSI you want to add software to (like 2023.06-software.eessi.io
).
If all goes well, one or more bots should almost instantly create a comment in your pull request with an overview of how it is configured - you will need this information when providing build instructions.
"},{"location":"adding_software/opening_pr/#rebuilding-software","title":"Rebuilding software","text":"We typically do not rebuild software, since (strictly speaking) this breaks reproducibility for anyone using the software. However, there are certain situations in which it is difficult or impossible to avoid.
To do a rebuild, you add the software you want to rebuild to a dedicated easystack file in the rebuilds
directory. Use the following naming convention: YYYYMMDD-eb-<EB_VERSION>-<APPLICATION_NAME>-<APPLICATION_VERSION>-<SHORT_DESCRIPTION>.yml
, where YYYYMMDD
is the opening date of your PR. E.g. 2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml
was added in a PR on the 6th of May 2024 and used to rebuild CUDA-12.1.1 using EasyBuild 4.9.1 to resolve an issue with some runtime libraries missing from the initial CUDA 12.1.1 installation.
At the top of your easystack file, please use comments to include a short description, and make sure to include any relevant links to related issues (e.g. from the GitHub repositories of EESSI, EasyBuild, or the software you are rebuilding).
As an example, consider the full easystack file (2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml
) used for the aforementioned CUDA rebuild:
# 2024.05.06\n# Original matching of files we could ship was not done correctly. We were\n# matching the basename for files (e.g., libcudart.so from libcudart.so.12)\n# rather than the name stub (libcudart)\n# See https://github.com/EESSI/software-layer/pull/559\neasyconfigs:\n - CUDA-12.1.1.eb:\n options:\n accept-eula-for: CUDA\n
By separating rebuilds in dedicated files, we still maintain a complete software bill of materials: it is transparent what got rebuilt, for which reason, and when.
"},{"location":"adding_software/overview/","title":"Overview of adding software to EESSI","text":"We welcome contributions to the EESSI software stack. This page shows the procedure and provides links to the contribution policy and the technical details of making a contribution.
"},{"location":"adding_software/overview/#contribute-a-software-to-the-eessi-software-stack","title":"Contribute a software to the EESSI software stack","text":"\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n I(contributor) \n K(reviewer)\n A(Is there an EasyConfig for software) -->|No|B(Create an EasyConfig and contribute it to EasyBuild)\n A --> |Yes|D(Create a PR to software-layer)\n B --> C(Evaluate and merge pull request)\n C --> D\n D --> E(Review PR & trigger builds)\n E --> F(Debug build issue if needed)\n F --> G(Deploy tarballs to S3 bucket)\n G --> H(Ingest tarballs in EESSI by merging staging PRs)\n classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n class A,B,D,F,I blue\n click B \"https://easybuild.io/\"\n click D \"../opening_pr/\"\n click F \"../debugging_failed_builds/\"\n
"},{"location":"adding_software/overview/#contributing-a-reframe-test-to-the-eessi-test-suite","title":"Contributing a ReFrame test to the EESSI test suite","text":"Ideally, a contributor prepares a ReFrame test for the software to be added to the EESSI software stack.
\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n\n Z(Create ReFrame test & PR to tests-suite) --> Y(Review PR & run new test)\n Y --> W(Debug issue if needed) \n W --> V(Review PR if needed)\n V --> U(Merge PR)\n classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n class Z,W blue\n
"},{"location":"adding_software/overview/#more-about-adding-software-to-eessi","title":"More about adding software to EESSI","text":" - Contribution policy
- Opening a pull request (for contributors)
- Building software (for maintainers)
- Debugging failed builds (for contributors + maintainers)
- Deploying software (for maintainers)
If you need help with adding software to EESSI, please open a support request.
"},{"location":"available_software/overview/","title":"Available software (via modules)","text":"This table gives an overview of all the available software in EESSI per specific CPU target.
Name aarch64 x86_64 amd intel generic neoverse_n1 neoverse_v1 generic zen2 zen3 haswell skylake_avx512"},{"location":"available_software/detail/ALL/","title":"ALL","text":"A Load Balancing Library (ALL) aims to provide an easy way to include dynamicdomain-based load balancing into particle based simulation codes. The libraryis developed in the Simulation Laboratory Molecular Systems of the J\u00fclichSupercomputing Centre at Forschungszentrum J\u00fclich.
https://gitlab.jsc.fz-juelich.de/SLMS/loadbalancing
"},{"location":"available_software/detail/ALL/#available-modules","title":"Available modules","text":"The overview below shows which ALL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ALL, load one of these modules using a module load
command like:
module load ALL/0.9.2-foss-2023a\n
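As a minimal usage sketch that applies to any of the packages listed on these pages (assuming the EESSI environment has already been initialised, so that its modules are in your module path), you can first check which versions are available and then verify what got loaded:

```bash
module avail ALL                  # list the available versions of the module
module load ALL/0.9.2-foss-2023a  # load a specific version
module list                       # confirm it (and its dependencies) are loaded
```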
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ALL/0.9.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/AOFlagger/","title":"AOFlagger","text":"The AOFlagger is a tool that can find and remove radio-frequency interference (RFI) in radio astronomical observations. It can make use of Lua scripts to make flagging strategies flexible, and the tools are applicable to a wide set of telescopes.
https://aoflagger.readthedocs.io/
"},{"location":"available_software/detail/AOFlagger/#available-modules","title":"Available modules","text":"The overview below shows which AOFlagger installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using AOFlagger, load one of these modules using a module load
command like:
module load AOFlagger/3.4.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 AOFlagger/3.4.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/ATK/","title":"ATK","text":"ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.
https://developer.gnome.org/atk/
"},{"location":"available_software/detail/ATK/#available-modules","title":"Available modules","text":"The overview below shows which ATK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ATK, load one of these modules using a module load
command like:
module load ATK/2.38.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ATK/2.38.0-GCCcore-12.3.0 x x x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Abseil/","title":"Abseil","text":"Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. The Abseil library code is collected from Google's own C++ code base, has been extensively tested and used in production, and is the same code we depend on in our daily coding lives.
https://abseil.io/
"},{"location":"available_software/detail/Abseil/#available-modules","title":"Available modules","text":"The overview below shows which Abseil installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Abseil, load one of these modules using a module load
command like:
module load Abseil/20230125.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Abseil/20230125.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Armadillo/","title":"Armadillo","text":"Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.
https://arma.sourceforge.net/
"},{"location":"available_software/detail/Armadillo/#available-modules","title":"Available modules","text":"The overview below shows which Armadillo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Armadillo, load one of these modules using a module load
command like:
module load Armadillo/12.8.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Armadillo/12.8.0-foss-2023b x x x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x x x"},{"location":"available_software/detail/BLIS/","title":"BLIS","text":"BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries.
https://github.com/flame/blis/
"},{"location":"available_software/detail/BLIS/#available-modules","title":"Available modules","text":"The overview below shows which BLIS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BLIS, load one of these modules using a module load
command like:
module load BLIS/0.9.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLIS/0.9.0-GCC-13.2.0 x x x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/BWA/","title":"BWA","text":"Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.
http://bio-bwa.sourceforge.net/
"},{"location":"available_software/detail/BWA/#available-modules","title":"Available modules","text":"The overview below shows which BWA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BWA, load one of these modules using a module load
command like:
module load BWA/0.7.17-20220923-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BWA/0.7.17-20220923-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Bazel/","title":"Bazel","text":"Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.
https://bazel.io/
"},{"location":"available_software/detail/Bazel/#available-modules","title":"Available modules","text":"The overview below shows which Bazel installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Bazel, load one of these modules using a module load
command like:
module load Bazel/6.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bazel/6.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/","title":"BeautifulSoup","text":"Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping.
https://www.crummy.com/software/BeautifulSoup
"},{"location":"available_software/detail/BeautifulSoup/#available-modules","title":"Available modules","text":"The overview below shows which BeautifulSoup installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using BeautifulSoup, load one of these modules using a module load
command like:
module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/#beautifulsoup4122-gcccore-1230","title":"BeautifulSoup/4.12.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
BeautifulSoup-4.12.2, soupsieve-2.4.1
"},{"location":"available_software/detail/Bison/","title":"Bison","text":"Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.
https://www.gnu.org/software/bison
"},{"location":"available_software/detail/Bison/#available-modules","title":"Available modules","text":"The overview below shows which Bison installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Bison, load one of these modules using a module load
command like:
module load Bison/3.8.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bison/3.8.2-GCCcore-13.2.0 x x x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Boost.MPI/","title":"Boost.MPI","text":"Boost provides free peer-reviewed portable C++ source libraries.
https://www.boost.org/
"},{"location":"available_software/detail/Boost.MPI/#available-modules","title":"Available modules","text":"The overview below shows which Boost.MPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost.MPI, load one of these modules using a module load
command like:
module load Boost.MPI/1.82.0-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.MPI/1.82.0-gompi-2023a x x x x x x x x Boost.MPI/1.81.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/Boost.Python/","title":"Boost.Python","text":"Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.
https://boostorg.github.io/python
"},{"location":"available_software/detail/Boost.Python/#available-modules","title":"Available modules","text":"The overview below shows which Boost.Python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost.Python, load one of these modules using a module load
command like:
module load Boost.Python/1.83.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.Python/1.83.0-GCC-13.2.0 x x x x x x x x"},{"location":"available_software/detail/Boost/","title":"Boost","text":"Boost provides free peer-reviewed portable C++ source libraries.
https://www.boost.org/
"},{"location":"available_software/detail/Boost/#available-modules","title":"Available modules","text":"The overview below shows which Boost installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Boost, load one of these modules using a module load
command like:
module load Boost/1.83.0-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost/1.83.0-GCC-13.2.0 x x x x x x x x Boost/1.82.0-GCC-12.3.0 x x x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Brotli/","title":"Brotli","text":"Brotli is a generic-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed with deflate but offers more dense compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932.
https://github.com/google/brotli
"},{"location":"available_software/detail/Brotli/#available-modules","title":"Available modules","text":"The overview below shows which Brotli installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Brotli, load one of these modules using a module load
command like:
module load Brotli/1.1.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brotli/1.1.0-GCCcore-13.2.0 x x x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Brunsli/","title":"Brunsli","text":"Brunsli is a lossless JPEG repacking library.
https://github.com/google/brunsli/
"},{"location":"available_software/detail/Brunsli/#available-modules","title":"Available modules","text":"The overview below shows which Brunsli installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Brunsli, load one of these modules using a module load
command like:
module load Brunsli/0.1-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brunsli/0.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CDO/","title":"CDO","text":"CDO is a collection of command line Operators to manipulate and analyse Climate and NWP model Data.
https://code.zmaw.de/projects/cdo
"},{"location":"available_software/detail/CDO/#available-modules","title":"Available modules","text":"The overview below shows which CDO installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CDO, load one of these modules using a module load
command like:
module load CDO/2.2.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CDO/2.2.2-gompi-2023b x x x x x x x x CDO/2.2.2-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/CFITSIO/","title":"CFITSIO","text":"CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format.
https://heasarc.gsfc.nasa.gov/fitsio/
"},{"location":"available_software/detail/CFITSIO/#available-modules","title":"Available modules","text":"The overview below shows which CFITSIO installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CFITSIO, load one of these modules using a module load
command like:
module load CFITSIO/4.3.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CFITSIO/4.3.1-GCCcore-13.2.0 x x x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CGAL/","title":"CGAL","text":"The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.
https://www.cgal.org/
"},{"location":"available_software/detail/CGAL/#available-modules","title":"Available modules","text":"The overview below shows which CGAL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CGAL, load one of these modules using a module load
command like:
module load CGAL/5.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CGAL/5.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/CMake/","title":"CMake","text":"CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.
https://www.cmake.org
"},{"location":"available_software/detail/CMake/#available-modules","title":"Available modules","text":"The overview below shows which CMake installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CMake, load one of these modules using a module load
command like:
module load CMake/3.27.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CMake/3.27.6-GCCcore-13.2.0 x x x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/CUDA-Samples/","title":"CUDA-Samples","text":"Samples for CUDA developers which demonstrate features in the CUDA Toolkit
https://github.com/NVIDIA/cuda-samples
"},{"location":"available_software/detail/CUDA-Samples/#available-modules","title":"Available modules","text":"The overview below shows which CUDA-Samples installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CUDA-Samples, load one of these modules using a module load
command like:
module load CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/CUDA/","title":"CUDA","text":"CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.
https://developer.nvidia.com/cuda-toolkit
"},{"location":"available_software/detail/CUDA/#available-modules","title":"Available modules","text":"The overview below shows which CUDA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CUDA, load one of these modules using a module load
command like:
module load CUDA/12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA/12.1.1 x x x x x x x x"},{"location":"available_software/detail/Catch2/","title":"Catch2","text":"A modern, C++-native, header-only, test framework for unit-tests, TDD and BDD - using C++11, C++14, C++17 and later
https://github.com/catchorg/Catch2
"},{"location":"available_software/detail/Catch2/#available-modules","title":"Available modules","text":"The overview below shows which Catch2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Catch2, load one of these modules using a module load
command like:
module load Catch2/2.13.9-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Catch2/2.13.9-GCCcore-13.2.0 x x x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Cbc/","title":"Cbc","text":"Cbc (Coin-or branch and cut) is an open-source mixed integer linear programming solver written in C++. It can be used as a callable library or using a stand-alone executable.
https://github.com/coin-or/Cbc
"},{"location":"available_software/detail/Cbc/#available-modules","title":"Available modules","text":"The overview below shows which Cbc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Cbc, load one of these modules using a module load
command like:
module load Cbc/2.10.11-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cbc/2.10.11-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Cgl/","title":"Cgl","text":"The COIN-OR Cut Generation Library (Cgl) is a collection of cut generators that can be used with other COIN-OR packages that make use of cuts, such as, among others, the linear solver Clp or the mixed integer linear programming solvers Cbc or BCP. Cgl uses the abstract class OsiSolverInterface (see Osi) to use or communicate with a solver. It does not directly call a solver.
https://github.com/coin-or/Cgl
"},{"location":"available_software/detail/Cgl/#available-modules","title":"Available modules","text":"The overview below shows which Cgl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Cgl, load one of these modules using a module load
command like:
module load Cgl/0.60.8-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cgl/0.60.8-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Clp/","title":"Clp","text":"Clp (Coin-or linear programming) is an open-source linear programming solver. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available.
https://github.com/coin-or/Clp
"},{"location":"available_software/detail/Clp/#available-modules","title":"Available modules","text":"The overview below shows which Clp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Clp, load one of these modules using a module load
command like:
module load Clp/1.17.9-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Clp/1.17.9-foss-2023a x x x x x x x x"},{"location":"available_software/detail/CoinUtils/","title":"CoinUtils","text":"CoinUtils (Coin-OR Utilities) is an open-source collection of classes and functions that are generally useful to more than one COIN-OR project.
https://github.com/coin-or/CoinUtils
"},{"location":"available_software/detail/CoinUtils/#available-modules","title":"Available modules","text":"The overview below shows which CoinUtils installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using CoinUtils, load one of these modules using a module load
command like:
module load CoinUtils/2.11.10-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 CoinUtils/2.11.10-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/DB/","title":"DB","text":"Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
https://www.oracle.com/technetwork/products/berkeleydb
"},{"location":"available_software/detail/DB/#available-modules","title":"Available modules","text":"The overview below shows which DB installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using DB, load one of these modules using a module load
command like:
module load DB/18.1.40-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB/18.1.40-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/DP3/","title":"DP3","text":"DP3: streaming processing pipeline for radio interferometric data.
https://dp3.readthedocs.io/
"},{"location":"available_software/detail/DP3/#available-modules","title":"Available modules","text":"The overview below shows which DP3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using DP3, load one of these modules using a module load
command like:
module load DP3/6.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 DP3/6.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/Doxygen/","title":"Doxygen","text":"Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.
https://www.doxygen.org
"},{"location":"available_software/detail/Doxygen/#available-modules","title":"Available modules","text":"The overview below shows which Doxygen installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Doxygen, load one of these modules using a module load
command like:
module load Doxygen/1.9.8-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Doxygen/1.9.8-GCCcore-13.2.0 x x x x x x x x Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ELPA/","title":"ELPA","text":"Eigenvalue SoLvers for Petaflop-Applications.
https://elpa.mpcdf.mpg.de/
"},{"location":"available_software/detail/ELPA/#available-modules","title":"Available modules","text":"The overview below shows which ELPA installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ELPA, load one of these modules using a module load
command like:
module load ELPA/2022.05.001-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ELPA/2022.05.001-foss-2022b x x x x x x x x"},{"location":"available_software/detail/ESPResSo/","title":"ESPResSo","text":"A software package for performing and analyzing scientific Molecular Dynamics simulations.
https://espressomd.org/wordpress
"},{"location":"available_software/detail/ESPResSo/#available-modules","title":"Available modules","text":"The overview below shows which ESPResSo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ESPResSo, load one of these modules using a module load
command like:
module load ESPResSo/4.2.1-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ESPResSo/4.2.1-foss-2023a x x x x x x x x"},{"location":"available_software/detail/EasyBuild/","title":"EasyBuild","text":"EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.
https://easybuilders.github.io/easybuild
"},{"location":"available_software/detail/EasyBuild/#available-modules","title":"Available modules","text":"The overview below shows which EasyBuild installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using EasyBuild, load one of these modules using a module load
command like:
module load EasyBuild/4.9.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 EasyBuild/4.9.0 x x x x x x x x EasyBuild/4.8.2 x x x x x x x x"},{"location":"available_software/detail/Eigen/","title":"Eigen","text":"Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
https://eigen.tuxfamily.org
"},{"location":"available_software/detail/Eigen/#available-modules","title":"Available modules","text":"The overview below shows which Eigen installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Eigen, load one of these modules using a module load
command like:
module load Eigen/3.4.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Eigen/3.4.0-GCCcore-13.2.0 x x x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/EveryBeam/","title":"EveryBeam","text":"Library that provides the antenna response pattern for several instruments, such as LOFAR (and LOBES), SKA (OSKAR), MWA, JVLA, etc.
https://everybeam.readthedocs.io/
"},{"location":"available_software/detail/EveryBeam/#available-modules","title":"Available modules","text":"The overview below shows which EveryBeam installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using EveryBeam, load one of these modules using a module load
command like:
module load EveryBeam/0.5.2-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 EveryBeam/0.5.2-foss-2023b x x x x x x x x"},{"location":"available_software/detail/FFTW.MPI/","title":"FFTW.MPI","text":"FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
https://www.fftw.org
"},{"location":"available_software/detail/FFTW.MPI/#available-modules","title":"Available modules","text":"The overview below shows which FFTW.MPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFTW.MPI, load one of these modules using a module load
command like:
module load FFTW.MPI/3.3.10-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW.MPI/3.3.10-gompi-2023b x x x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/FFTW/","title":"FFTW","text":"FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.
https://www.fftw.org
"},{"location":"available_software/detail/FFTW/#available-modules","title":"Available modules","text":"The overview below shows which FFTW installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFTW, load one of these modules using a module load
command like:
module load FFTW/3.3.10-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW/3.3.10-GCC-13.2.0 x x x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/FFmpeg/","title":"FFmpeg","text":"A complete, cross-platform solution to record, convert and stream audio and video.
https://www.ffmpeg.org/
"},{"location":"available_software/detail/FFmpeg/#available-modules","title":"Available modules","text":"The overview below shows which FFmpeg installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FFmpeg, load one of these modules using a module load
command like:
module load FFmpeg/6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFmpeg/6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/FlexiBLAS/","title":"FlexiBLAS","text":"FlexiBLAS is a wrapper library that enables the exchange of the BLAS and LAPACK implementation used by a program without recompiling or relinking it.
https://gitlab.mpi-magdeburg.mpg.de/software/flexiblas-release
"},{"location":"available_software/detail/FlexiBLAS/#available-modules","title":"Available modules","text":"The overview below shows which FlexiBLAS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FlexiBLAS, load one of these modules using a module load
command like:
module load FlexiBLAS/3.3.1-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/FriBidi/","title":"FriBidi","text":"The Free Implementation of the Unicode Bidirectional Algorithm.
https://github.com/fribidi/fribidi
"},{"location":"available_software/detail/FriBidi/#available-modules","title":"Available modules","text":"The overview below shows which FriBidi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using FriBidi, load one of these modules using a module load
command like:
module load FriBidi/1.0.12-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GCC/","title":"GCC","text":"The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
https://gcc.gnu.org/
"},{"location":"available_software/detail/GCC/#available-modules","title":"Available modules","text":"The overview below shows which GCC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GCC, load one of these modules using a module load
command like:
module load GCC/13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCC/13.2.0 x x x x x x x x GCC/12.3.0 x x x x x x x x GCC/12.2.0 x x x x x x x x"},{"location":"available_software/detail/GCCcore/","title":"GCCcore","text":"The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).
https://gcc.gnu.org/
"},{"location":"available_software/detail/GCCcore/#available-modules","title":"Available modules","text":"The overview below shows which GCCcore installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GCCcore, load one of these modules using a module load
command like:
module load GCCcore/13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCCcore/13.2.0 x x x x x x x x GCCcore/12.3.0 x x x x x x x x GCCcore/12.2.0 x x x x x x x x"},{"location":"available_software/detail/GDAL/","title":"GDAL","text":"GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.
https://www.gdal.org
"},{"location":"available_software/detail/GDAL/#available-modules","title":"Available modules","text":"The overview below shows which GDAL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GDAL, load one of these modules using a module load
command like:
module load GDAL/3.6.2-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDAL/3.6.2-foss-2022b x x x x x x x x"},{"location":"available_software/detail/GDRCopy/","title":"GDRCopy","text":"A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.
https://github.com/NVIDIA/gdrcopy
"},{"location":"available_software/detail/GDRCopy/#available-modules","title":"Available modules","text":"The overview below shows which GDRCopy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GDRCopy, load one of these modules using a module load
command like:
module load GDRCopy/2.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDRCopy/2.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GEOS/","title":"GEOS","text":"GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)
https://trac.osgeo.org/geos
"},{"location":"available_software/detail/GEOS/#available-modules","title":"Available modules","text":"The overview below shows which GEOS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GEOS, load one of these modules using a module load
command like:
module load GEOS/3.11.1-GCC-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GEOS/3.11.1-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GLPK/","title":"GLPK","text":"The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.
https://www.gnu.org/software/glpk/
"},{"location":"available_software/detail/GLPK/#available-modules","title":"Available modules","text":"The overview below shows which GLPK installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GLPK, load one of these modules using a module load
command like:
module load GLPK/5.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLPK/5.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GLib/","title":"GLib","text":"GLib is one of the base libraries of the GTK+ project
https://www.gtk.org/
"},{"location":"available_software/detail/GLib/#available-modules","title":"Available modules","text":"The overview below shows which GLib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GLib, load one of these modules using a module load
command like:
module load GLib/2.77.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLib/2.77.1-GCCcore-12.3.0 x x x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GMP/","title":"GMP","text":"GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.
https://gmplib.org/
"},{"location":"available_software/detail/GMP/#available-modules","title":"Available modules","text":"The overview below shows which GMP installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GMP, load one of these modules using a module load
command like:
module load GMP/6.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GMP/6.2.1-GCCcore-12.3.0 x x x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GObject-Introspection/","title":"GObject-Introspection","text":"GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.
https://gi.readthedocs.io/en/latest/
"},{"location":"available_software/detail/GObject-Introspection/#available-modules","title":"Available modules","text":"The overview below shows which GObject-Introspection installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GObject-Introspection, load one of these modules using a module load
command like:
module load GObject-Introspection/1.76.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GSL/","title":"GSL","text":"The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.
https://www.gnu.org/software/gsl/
"},{"location":"available_software/detail/GSL/#available-modules","title":"Available modules","text":"The overview below shows which GSL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GSL, load one of these modules using a module load
command like:
module load GSL/2.7-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GSL/2.7-GCC-13.2.0 x x x x x x x x GSL/2.7-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GTK3/","title":"GTK3","text":"GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.
https://developer.gnome.org/gtk3/stable/
"},{"location":"available_software/detail/GTK3/#available-modules","title":"Available modules","text":"The overview below shows which GTK3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GTK3, load one of these modules using a module load
command like:
module load GTK3/3.24.37-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GTK3/3.24.37-GCCcore-12.3.0 x x x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Gdk-Pixbuf/","title":"Gdk-Pixbuf","text":"The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.
https://docs.gtk.org/gdk-pixbuf/
"},{"location":"available_software/detail/Gdk-Pixbuf/#available-modules","title":"Available modules","text":"The overview below shows which Gdk-Pixbuf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Gdk-Pixbuf, load one of these modules using a module load
command like:
module load Gdk-Pixbuf/2.42.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Ghostscript/","title":"Ghostscript","text":"Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.
https://ghostscript.com
"},{"location":"available_software/detail/Ghostscript/#available-modules","title":"Available modules","text":"The overview below shows which Ghostscript installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Ghostscript, load one of these modules using a module load
command like:
module load Ghostscript/10.01.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/GitPython/","title":"GitPython","text":"GitPython is a python library used to interact with Git repositories
https://gitpython.readthedocs.org
"},{"location":"available_software/detail/GitPython/#available-modules","title":"Available modules","text":"The overview below shows which GitPython installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using GitPython, load one of these modules using a module load
command like:
module load GitPython/3.1.40-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 GitPython/3.1.40-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/GitPython/#gitpython3140-gcccore-1230","title":"GitPython/3.1.40-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
gitdb-4.0.11, GitPython-3.1.40, smmap-5.0.1
"},{"location":"available_software/detail/HDF/","title":"HDF","text":"HDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.
https://www.hdfgroup.org/products/hdf4/
"},{"location":"available_software/detail/HDF/#available-modules","title":"Available modules","text":"The overview below shows which HDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HDF, load one of these modules using a module load
command like:
module load HDF/4.2.15-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF/4.2.15-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/HDF5/","title":"HDF5","text":"HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.
https://portal.hdfgroup.org/display/support
"},{"location":"available_software/detail/HDF5/#available-modules","title":"Available modules","text":"The overview below shows which HDF5 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HDF5, load one of these modules using a module load
command like:
module load HDF5/1.14.3-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF5/1.14.3-gompi-2023b x x x x x x x x HDF5/1.14.0-gompi-2023a x x x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/HarfBuzz/","title":"HarfBuzz","text":"HarfBuzz is an OpenType text shaping engine.
https://www.freedesktop.org/wiki/Software/HarfBuzz
"},{"location":"available_software/detail/HarfBuzz/#available-modules","title":"Available modules","text":"The overview below shows which HarfBuzz installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HarfBuzz, load one of these modules using a module load
command like:
module load HarfBuzz/5.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/HepMC3/","title":"HepMC3","text":"HepMC is a standard for storing Monte Carlo event data.
http://hepmc.web.cern.ch/hepmc/
"},{"location":"available_software/detail/HepMC3/#available-modules","title":"Available modules","text":"The overview below shows which HepMC3 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using HepMC3, load one of these modules using a module load
command like:
module load HepMC3/3.2.6-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 HepMC3/3.2.6-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Highway/","title":"Highway","text":"Highway is a C++ library for SIMD (Single Instruction, Multiple Data), i.e. applying the same operation to 'lanes'.
https://github.com/google/highway
"},{"location":"available_software/detail/Highway/#available-modules","title":"Available modules","text":"The overview below shows which Highway installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Highway, load one of these modules using a module load
command like:
module load Highway/1.0.3-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Highway/1.0.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ICU/","title":"ICU","text":"ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.
https://icu.unicode.org
"},{"location":"available_software/detail/ICU/#available-modules","title":"Available modules","text":"The overview below shows which ICU installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ICU, load one of these modules using a module load
command like:
module load ICU/74.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ICU/74.1-GCCcore-13.2.0 x x x x x x x x ICU/73.2-GCCcore-12.3.0 x x x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/IDG/","title":"IDG","text":"Image Domain Gridding (IDG) is a fast method for convolutional resampling (gridding/degridding) of radio astronomical data (visibilities). Direction dependent effects (DDEs) or A-terms can be applied in the gridding process. The algorithm is described in \"Image Domain Gridding: a fast method for convolutional resampling of visibilities\", Van der Tol (2018). The implementation is described in \"Radio-astronomical imaging on graphics processors\", Veenboer (2020). Please cite these papers in publications using IDG.
https://idg.readthedocs.io/
"},{"location":"available_software/detail/IDG/#available-modules","title":"Available modules","text":"The overview below shows which IDG installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using IDG, load one of these modules using a module load
command like:
module load IDG/1.2.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 IDG/1.2.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/IPython/","title":"IPython","text":"IPython provides a rich architecture for interactive computing with: Powerful interactive shells (terminal and Qt-based). A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into your own projects. Easy to use, high performance tools for parallel computing.
https://ipython.org/index.html
"},{"location":"available_software/detail/IPython/#available-modules","title":"Available modules","text":"The overview below shows which IPython installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using IPython, load one of these modules using a module load
command like:
module load IPython/8.14.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 IPython/8.14.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/IPython/#ipython8140-gcccore-1230","title":"IPython/8.14.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
asttokens-2.2.1, backcall-0.2.0, executing-1.2.0, ipython-8.14.0, jedi-0.19.0, matplotlib-inline-0.1.6, parso-0.8.3, pickleshare-0.7.5, prompt_toolkit-3.0.39, pure_eval-0.2.2, stack_data-0.6.2, traitlets-5.9.0
"},{"location":"available_software/detail/ImageMagick/","title":"ImageMagick","text":"ImageMagick is a software suite to create, edit, compose, or convert bitmap images
https://www.imagemagick.org/
"},{"location":"available_software/detail/ImageMagick/#available-modules","title":"Available modules","text":"The overview below shows which ImageMagick installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ImageMagick, load one of these modules using a module load
command like:
module load ImageMagick/7.1.1-15-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Imath/","title":"Imath","text":"Imath is a C++ and python library of 2D and 3D vector, matrix, and math operations for computer graphics
https://imath.readthedocs.io/en/latest/
"},{"location":"available_software/detail/Imath/#available-modules","title":"Available modules","text":"The overview below shows which Imath installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Imath, load one of these modules using a module load
command like:
module load Imath/3.1.6-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Imath/3.1.6-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/JasPer/","title":"JasPer","text":"The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.
https://www.ece.uvic.ca/~frodo/jasper/
"},{"location":"available_software/detail/JasPer/#available-modules","title":"Available modules","text":"The overview below shows which JasPer installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JasPer, load one of these modules using a module load
command like:
module load JasPer/4.0.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JasPer/4.0.0-GCCcore-13.2.0 x x x x x x x x JasPer/4.0.0-GCCcore-12.3.0 x x x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Java/","title":"Java","text":""},{"location":"available_software/detail/Java/#available-modules","title":"Available modules","text":"The overview below shows which Java installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Java, load one of these modules using a module load
command like:
module load Java/11.0.20\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Java/11.0.20 x x x x x x x x Java/11(@Java/11.0.20) x x x x x x x x"},{"location":"available_software/detail/JsonCpp/","title":"JsonCpp","text":"JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comment in unserialization/serialization steps, making it a convenient format to store user input files.
https://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html
"},{"location":"available_software/detail/JsonCpp/#available-modules","title":"Available modules","text":"The overview below shows which JsonCpp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JsonCpp, load one of these modules using a module load
command like:
module load JsonCpp/1.9.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/JupyterLab/","title":"JupyterLab","text":"JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook.
https://jupyter.org/
"},{"location":"available_software/detail/JupyterLab/#available-modules","title":"Available modules","text":"The overview below shows which JupyterLab installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JupyterLab, load one of these modules using a module load
command like:
module load JupyterLab/4.0.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/JupyterLab/#jupyterlab405-gcccore-1230","title":"JupyterLab/4.0.5-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
async-lru-2.0.4, json5-0.9.14, jupyter-lsp-2.2.0, jupyterlab-4.0.5, jupyterlab_server-2.24.0
"},{"location":"available_software/detail/JupyterNotebook/","title":"JupyterNotebook","text":"The Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience.
https://jupyter.org/
"},{"location":"available_software/detail/JupyterNotebook/#available-modules","title":"Available modules","text":"The overview below shows which JupyterNotebook installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using JupyterNotebook, load one of these modules using a module load
command like:
module load JupyterNotebook/7.0.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LAME/","title":"LAME","text":"LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.
http://lame.sourceforge.net/
"},{"location":"available_software/detail/LAME/#available-modules","title":"Available modules","text":"The overview below shows which LAME installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LAME, load one of these modules using a module load
command like:
module load LAME/3.100-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAME/3.100-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LAMMPS/","title":"LAMMPS","text":"LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
https://www.lammps.org
"},{"location":"available_software/detail/LAMMPS/#available-modules","title":"Available modules","text":"The overview below shows which LAMMPS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LAMMPS, load one of these modules using a module load
command like:
module load LAMMPS/2Aug2023_update2-foss-2023a-kokkos\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAMMPS/2Aug2023_update2-foss-2023a-kokkos x x x x x x x x"},{"location":"available_software/detail/LERC/","title":"LERC","text":"LERC is an open-source image or raster format which supports rapid encoding and decoding for any pixel type (not just RGB or Byte). Users set the maximum compression error per pixel while encoding, so the precision of the original input image is preserved (within user defined error bounds).
https://github.com/Esri/lerc
"},{"location":"available_software/detail/LERC/#available-modules","title":"Available modules","text":"The overview below shows which LERC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LERC, load one of these modules using a module load
command like:
module load LERC/4.0.0-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LERC/4.0.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LHAPDF/","title":"LHAPDF","text":"Les Houches Parton Density Function. LHAPDF is the standard tool for evaluating parton distribution functions (PDFs) in high-energy physics.
http://lhapdf.hepforge.org/
"},{"location":"available_software/detail/LHAPDF/#available-modules","title":"Available modules","text":"The overview below shows which LHAPDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LHAPDF, load one of these modules using a module load
command like:
module load LHAPDF/6.5.4-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LHAPDF/6.5.4-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/LLVM/","title":"LLVM","text":"The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation (\"LLVM IR\"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.
https://llvm.org/
"},{"location":"available_software/detail/LLVM/#available-modules","title":"Available modules","text":"The overview below shows which LLVM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LLVM, load one of these modules using a module load
command like:
module load LLVM/16.0.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LLVM/16.0.6-GCCcore-12.3.0 x x x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LibTIFF/","title":"LibTIFF","text":"tiff: Library and tools for reading and writing TIFF data files
https://libtiff.gitlab.io/libtiff/
"},{"location":"available_software/detail/LibTIFF/#available-modules","title":"Available modules","text":"The overview below shows which LibTIFF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LibTIFF, load one of these modules using a module load
command like:
module load LibTIFF/4.6.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LittleCMS/","title":"LittleCMS","text":"Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.
https://www.littlecms.com/
"},{"location":"available_software/detail/LittleCMS/#available-modules","title":"Available modules","text":"The overview below shows which LittleCMS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LittleCMS, load one of these modules using a module load
command like:
module load LittleCMS/2.15-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LittleCMS/2.15-GCCcore-12.3.0 x x x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/LoopTools/","title":"LoopTools","text":"LoopTools is a package for evaluation of scalar and tensor one-loop integrals. It is based on the FF package by G.J. van Oldenborgh.
https://feynarts.de/looptools/
"},{"location":"available_software/detail/LoopTools/#available-modules","title":"Available modules","text":"The overview below shows which LoopTools installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using LoopTools, load one of these modules using a module load
command like:
module load LoopTools/2.15-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 LoopTools/2.15-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Lua/","title":"Lua","text":"Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.
https://www.lua.org/
"},{"location":"available_software/detail/Lua/#available-modules","title":"Available modules","text":"The overview below shows which Lua installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Lua, load one of these modules using a module load
command like:
module load Lua/5.4.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Lua/5.4.6-GCCcore-13.2.0 x x x x x x x x Lua/5.4.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MDI/","title":"MDI","text":"The MolSSI Driver Interface (MDI) project provides a standardized API for fast, on-the-fly communication between computational chemistry codes. This greatly simplifies the process of implementing methods that require the cooperation of multiple software packages and enables developers to write a single implementation that works across many different codes. The API is sufficiently general to support a wide variety of techniques, including QM/MM, ab initio MD, machine learning, advanced sampling, and path integral MD, while also being straightforwardly extensible. Communication between codes is handled by the MDI Library, which enables tight coupling between codes using either the MPI or TCP/IP methods.
https://github.com/MolSSI-MDI/MDI_Library
"},{"location":"available_software/detail/MDI/#available-modules","title":"Available modules","text":"The overview below shows which MDI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MDI, load one of these modules using a module load
command like:
module load MDI/1.4.26-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDI/1.4.26-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/METIS/","title":"METIS","text":"METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
http://glaros.dtc.umn.edu/gkhome/metis/metis/overview
"},{"location":"available_software/detail/METIS/#available-modules","title":"Available modules","text":"The overview below shows which METIS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using METIS, load one of these modules using a module load
command like:
module load METIS/5.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 METIS/5.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MPC/","title":"MPC","text":"Gnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.
http://www.multiprecision.org/
"},{"location":"available_software/detail/MPC/#available-modules","title":"Available modules","text":"The overview below shows which MPC installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MPC, load one of these modules using a module load
command like:
module load MPC/1.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPC/1.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MPFR/","title":"MPFR","text":"The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.
https://www.mpfr.org
"},{"location":"available_software/detail/MPFR/#available-modules","title":"Available modules","text":"The overview below shows which MPFR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MPFR, load one of these modules using a module load
command like:
module load MPFR/4.2.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPFR/4.2.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/MUMPS/","title":"MUMPS","text":"A parallel sparse direct solver
https://graal.ens-lyon.fr/MUMPS/
"},{"location":"available_software/detail/MUMPS/#available-modules","title":"Available modules","text":"The overview below shows which MUMPS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using MUMPS, load one of these modules using a module load
command like:
module load MUMPS/5.6.1-foss-2023a-metis\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 MUMPS/5.6.1-foss-2023a-metis x x x x x x x x"},{"location":"available_software/detail/Mako/","title":"Mako","text":"A super-fast templating language that borrows the best ideas from the existing templating languages
https://www.makotemplates.org
"},{"location":"available_software/detail/Mako/#available-modules","title":"Available modules","text":"The overview below shows which Mako installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Mako, load one of these modules using a module load
command like:
module load Mako/1.2.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mako/1.2.4-GCCcore-12.3.0 x x x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Mako/#mako124-gcccore-1230","title":"Mako/1.2.4-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
Mako-1.2.4, MarkupSafe-2.1.3
"},{"location":"available_software/detail/Mesa/","title":"Mesa","text":"Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.
https://www.mesa3d.org/
"},{"location":"available_software/detail/Mesa/#available-modules","title":"Available modules","text":"The overview below shows which Mesa installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Mesa, load one of these modules using a module load
command like:
module load Mesa/23.1.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mesa/23.1.4-GCCcore-12.3.0 x x x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Meson/","title":"Meson","text":"Meson is a cross-platform build system designed to be both as fast and as user friendly as possible.
https://mesonbuild.com
"},{"location":"available_software/detail/Meson/#available-modules","title":"Available modules","text":"The overview below shows which Meson installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Meson, load one of these modules using a module load
command like:
module load Meson/1.2.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Meson/1.2.3-GCCcore-13.2.0 x x x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NASM/","title":"NASM","text":"NASM: General-purpose x86 assembler
https://www.nasm.us/
"},{"location":"available_software/detail/NASM/#available-modules","title":"Available modules","text":"The overview below shows which NASM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NASM, load one of these modules using a module load
command like:
module load NASM/2.16.01-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NASM/2.16.01-GCCcore-13.2.0 x x x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NCCL/","title":"NCCL","text":"The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.
https://developer.nvidia.com/nccl
"},{"location":"available_software/detail/NCCL/#available-modules","title":"Available modules","text":"The overview below shows which NCCL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NCCL, load one of these modules using a module load
command like:
module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/NSPR/","title":"NSPR","text":"Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR
"},{"location":"available_software/detail/NSPR/#available-modules","title":"Available modules","text":"The overview below shows which NSPR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NSPR, load one of these modules using a module load
command like:
module load NSPR/4.35-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSPR/4.35-GCCcore-12.3.0 x x x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/NSS/","title":"NSS","text":"Network Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.
https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS
"},{"location":"available_software/detail/NSS/#available-modules","title":"Available modules","text":"The overview below shows which NSS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using NSS, load one of these modules using a module load
command like:
module load NSS/3.89.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSS/3.89.1-GCCcore-12.3.0 x x x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Nextflow/","title":"Nextflow","text":"Nextflow is a reactive workflow framework and a programming DSL that eases writing computational pipelines with complex data
https://www.nextflow.io/
"},{"location":"available_software/detail/Nextflow/#available-modules","title":"Available modules","text":"The overview below shows which Nextflow installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Nextflow, load one of these modules using a module load
command like:
module load Nextflow/23.10.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Nextflow/23.10.0 x x x x x x x x"},{"location":"available_software/detail/Ninja/","title":"Ninja","text":"Ninja is a small build system with a focus on speed.
https://ninja-build.org/
"},{"location":"available_software/detail/Ninja/#available-modules","title":"Available modules","text":"The overview below shows which Ninja installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Ninja, load one of these modules using a module load
command like:
module load Ninja/1.11.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ninja/1.11.1-GCCcore-13.2.0 x x x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OSU-Micro-Benchmarks/","title":"OSU-Micro-Benchmarks","text":"OSU Micro-Benchmarks
https://mvapich.cse.ohio-state.edu/benchmarks/
"},{"location":"available_software/detail/OSU-Micro-Benchmarks/#available-modules","title":"Available modules","text":"The overview below shows which OSU-Micro-Benchmarks installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OSU-Micro-Benchmarks, load one of these modules using a module load
command like:
module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x x x OSU-Micro-Benchmarks/7.2-gompi-2023a-CUDA-12.1.1 x x x x x x x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/OpenBLAS/","title":"OpenBLAS","text":"OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
http://www.openblas.net/
"},{"location":"available_software/detail/OpenBLAS/#available-modules","title":"Available modules","text":"The overview below shows which OpenBLAS installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenBLAS, load one of these modules using a module load
command like:
module load OpenBLAS/0.3.24-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenEXR/","title":"OpenEXR","text":"OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications
https://www.openexr.com/
"},{"location":"available_software/detail/OpenEXR/#available-modules","title":"Available modules","text":"The overview below shows which OpenEXR installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenEXR, load one of these modules using a module load
command like:
module load OpenEXR/3.1.5-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenFOAM/","title":"OpenFOAM","text":"OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.
https://www.openfoam.org/
"},{"location":"available_software/detail/OpenFOAM/#available-modules","title":"Available modules","text":"The overview below shows which OpenFOAM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenFOAM, load one of these modules using a module load
command like:
module load OpenFOAM/11-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenFOAM/11-foss-2023a x x x x x x x x"},{"location":"available_software/detail/OpenJPEG/","title":"OpenJPEG","text":"OpenJPEG is an open-source JPEG 2000 codec written in C language. It has been developed in order to promote the use of JPEG 2000, a still-image compression standard from the Joint Photographic Experts Group (JPEG). Since May 2015, it is officially recognized by ISO/IEC and ITU-T as a JPEG 2000 Reference Software.
https://www.openjpeg.org/
"},{"location":"available_software/detail/OpenJPEG/#available-modules","title":"Available modules","text":"The overview below shows which OpenJPEG installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenJPEG, load one of these modules using a module load
command like:
module load OpenJPEG/2.5.0-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenMPI/","title":"OpenMPI","text":"The Open MPI Project is an open source MPI-3 implementation.
https://www.open-mpi.org/
"},{"location":"available_software/detail/OpenMPI/#available-modules","title":"Available modules","text":"The overview below shows which OpenMPI installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenMPI, load one of these modules using a module load
command like:
module load OpenMPI/4.1.6-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenMPI/4.1.6-GCC-13.2.0 x x x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/OpenPGM/","title":"OpenPGM","text":"OpenPGM is an open source implementation of the Pragmatic General Multicast (PGM) specification in RFC 3208 available at www.ietf.org. PGM is a reliable and scalable multicast protocol that enables receivers to detect loss, request retransmission of lost data, or notify an application of unrecoverable loss. PGM is a receiver-reliable protocol, which means the receiver is responsible for ensuring all data is received, absolving the sender of reception responsibility.
https://code.google.com/p/openpgm/
"},{"location":"available_software/detail/OpenPGM/#available-modules","title":"Available modules","text":"The overview below shows which OpenPGM installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenPGM, load one of these modules using a module load
command like:
module load OpenPGM/5.2.122-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/OpenSSL/","title":"OpenSSL","text":"The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolchain implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library.
https://www.openssl.org/
"},{"location":"available_software/detail/OpenSSL/#available-modules","title":"Available modules","text":"The overview below shows which OpenSSL installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using OpenSSL, load one of these modules using a module load
command like:
module load OpenSSL/1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenSSL/1.1 x x x x x x x x"},{"location":"available_software/detail/Osi/","title":"Osi","text":"Osi (Open Solver Interface) provides an abstract base class to a generic linear programming (LP) solver, along with derived classes for specific solvers. Many applications may be able to use the Osi to insulate themselves from a specific LP solver. That is, programs written to the OSI standard may be linked to any solver with an OSI interface and should produce correct results. The OSI has been significantly extended compared to its first incarnation. Currently, the OSI supports linear programming solvers and has rudimentary support for integer programming.
https://github.com/coin-or/Osi
"},{"location":"available_software/detail/Osi/#available-modules","title":"Available modules","text":"The overview below shows which Osi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Osi, load one of these modules using a module load
command like:
module load Osi/0.108.9-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Osi/0.108.9-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PCRE/","title":"PCRE","text":"The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
https://www.pcre.org/
"},{"location":"available_software/detail/PCRE/#available-modules","title":"Available modules","text":"The overview below shows which PCRE installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PCRE, load one of these modules using a module load
command like:
module load PCRE/8.45-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE/8.45-GCCcore-13.2.0 x x x x x x x x PCRE/8.45-GCCcore-12.3.0 x x x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PCRE2/","title":"PCRE2","text":"The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
https://www.pcre.org/
"},{"location":"available_software/detail/PCRE2/#available-modules","title":"Available modules","text":"The overview below shows which PCRE2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PCRE2, load one of these modules using a module load
command like:
module load PCRE2/10.42-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE2/10.42-GCCcore-12.3.0 x x x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PGPLOT/","title":"PGPLOT","text":"The PGPLOT Graphics Subroutine Library is a Fortran- or C-callable, device-independent graphics package for making simple scientific graphs. It is intended for making graphical images of publication quality with minimum effort on the part of the user. For most applications, the program can be device-independent, and the output can be directed to the appropriate device at run time.
https://sites.astro.caltech.edu/~tjp/pgplot/
"},{"location":"available_software/detail/PGPLOT/#available-modules","title":"Available modules","text":"The overview below shows which PGPLOT installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PGPLOT, load one of these modules using a module load
command like:
module load PGPLOT/5.2.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PGPLOT/5.2.2-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/PLUMED/","title":"PLUMED","text":"PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both fortran and C/C++ codes.
https://www.plumed.org
"},{"location":"available_software/detail/PLUMED/#available-modules","title":"Available modules","text":"The overview below shows which PLUMED installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PLUMED, load one of these modules using a module load
command like:
module load PLUMED/2.9.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLUMED/2.9.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PLY/","title":"PLY","text":"PLY is yet another implementation of lex and yacc for Python.
https://www.dabeaz.com/ply/
"},{"location":"available_software/detail/PLY/#available-modules","title":"Available modules","text":"The overview below shows which PLY installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PLY, load one of these modules using a module load
command like:
module load PLY/3.11-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLY/3.11-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PMIx/","title":"PMIx","text":"Process Management for Exascale Environments. PMI Exascale (PMIx) represents an attempt to provide an extended version of the PMI standard specifically designed to support clusters up to and including exascale sizes. The overall objective of the project is not to branch the existing pseudo-standard definitions - in fact, PMIx fully supports both of the existing PMI-1 and PMI-2 APIs - but rather to (a) augment and extend those APIs to eliminate some current restrictions that impact scalability, and (b) provide a reference implementation of the PMI-server that demonstrates the desired level of scalability.
https://pmix.org/
"},{"location":"available_software/detail/PMIx/#available-modules","title":"Available modules","text":"The overview below shows which PMIx installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PMIx, load one of these modules using a module load
command like:
module load PMIx/4.2.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PMIx/4.2.6-GCCcore-13.2.0 x x x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/PROJ/","title":"PROJ","text":"Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into cartesian coordinates
https://proj.org
"},{"location":"available_software/detail/PROJ/#available-modules","title":"Available modules","text":"The overview below shows which PROJ installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using PROJ, load one of these modules using a module load
command like:
module load PROJ/9.3.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PROJ/9.3.1-GCCcore-13.2.0 x x x x x x x x PROJ/9.2.0-GCCcore-12.3.0 x x x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Pango/","title":"Pango","text":"Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.
https://www.pango.org/
"},{"location":"available_software/detail/Pango/#available-modules","title":"Available modules","text":"The overview below shows which Pango installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Pango, load one of these modules using a module load
command like:
module load Pango/1.50.14-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pango/1.50.14-GCCcore-12.3.0 x x x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ParaView/","title":"ParaView","text":"ParaView is a scientific parallel visualizer.
https://www.paraview.org
"},{"location":"available_software/detail/ParaView/#available-modules","title":"Available modules","text":"The overview below shows which ParaView installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using ParaView, load one of these modules using a module load
command like:
module load ParaView/5.11.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParaView/5.11.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Perl/","title":"Perl","text":"Larry Wall's Practical Extraction and Report Language. Includes a small selection of extra CPAN packages for core functionality.
https://www.perl.org/
"},{"location":"available_software/detail/Perl/#available-modules","title":"Available modules","text":"The overview below shows which Perl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using Perl, load one of these modules using a module load
command like:
module load Perl/5.38.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl/5.38.0-GCCcore-13.2.0 x x x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Perl/#perl5380-gcccore-1320","title":"Perl/5.38.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21
"},{"location":"available_software/detail/Perl/#perl5361-gcccore-1230","title":"Perl/5.36.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21
"},{"location":"available_software/detail/Perl/#perl5360-gcccore-1220","title":"Perl/5.36.0-GCCcore-12.2.0","text":"This is a list of extensions included in the module:
Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.334, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.007, Business::ISBN::Data-20210112.006, Canary::Stability-2013, Capture::Tiny-0.48, Carp-1.50, Carp::Clan-6.08, Carp::Heavy-1.50, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Clone-0.45, Clone::Choose-0.010, common::sense-3.75, Config::General-2.65, Config::INI-0.027, Config::MVP-2.200012, Config::Simple-4.58, Config::Tiny-2.28, constant-1.33, CPAN::Meta::Check-0.014, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper-2.183, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.112, Data::Section-0.200007, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.58, DateTime::Locale-1.36, DateTime::TimeZone-2.53, DateTime::Tiny-1.07, DBD::CSV-0.59, DBD::SQLite-1.70, DBI-1.643, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.025, Email::Date::Format-1.005, Encode-3.19, Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter-5.74, Exporter::Declare-0.114, Exporter::Tiny-1.004000, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.64, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::Path-2.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.013, File::Spec-3.75, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long-2.52, Getopt::Long::Descriptive-0.110, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9725, Graph::ReadWrite-2.10, Hash::Merge-0.302, Heap-0.80, HTML::Entities::Interpolate-1.10, HTML::Form-6.10, HTML::Parser-3.78, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::Cookies-6.10, HTTP::Daemon-6.14, HTTP::Date-6.05, HTTP::Negotiate-6.01, HTTP::Request-6.37, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::HTML-1.004, IO::Socket::SSL-2.075, IO::String-1.08, IO::Stringy-2.113, IO::Tty-1.16, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.09, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.58, List::Util-1.63, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.70, Log::Dispatchouli-2.023, Log::Handler-0.90, 
Log::Log4perl-1.56, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.33, Log::Report::Optional-1.07, Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.67, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.879, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-1.013.1, MIME::Lite-3.033, MIME::Types-2.22, Mixin::Linewise::Readers-0.110, Mock::Quick-1.111, Module::Build-0.4231, Module::Build::Tiny-0.039, Module::Build::XSUtil-0.19, Module::CoreList-5.20220820, Module::Implementation-0.09, Module::Install-1.19, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000037, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005004, Moose-2.2201, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.006, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.201, MooseX::Types-0.50, MooseX::Types::Perl-0.101343, Mouse-v2.5.10, Mozilla::CA-20211001, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.14, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.17, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.30, Params::ValidationCompiler-0.30, parent-0.238, Parse::RecDescent-1.967015, Path::Tiny-0.124, PDF::API2-2.043, Perl::OSType-1.010, PerlIO::utf8_strict-0.009, Pod::Elemental-0.103005, Pod::Escapes-1.07, Pod::Eventual-0.094002, Pod::LaTeX-0.61, Pod::Man-4.14, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.43, Pod::Weaver-4.018, Readonly-2.05, Regexp::Common-2017060201, Role::HasMessage-0.006, Role::Identifiable::HasIdent-0.008, Role::Tiny-2.002004, Scalar::Util-1.63, Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104002, Specio-0.48, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Flogger-1.101245, String::Print-0.94, String::RewritePrefix-0.008, String::Truncate-1.100602, Sub::Exporter-0.988, Sub::Exporter::ForMethods-0.100054, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.928, Sub::Name-0.26, Sub::Quote-2.006006, Sub::Uplevel-0.2800, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Unknown-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.42, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000145, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.130, Test::Differences-0.69, Test::Exception-0.43, Test::Fatal-0.016, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More-1.302191, Test::More::UTF8-0.05, Test::Most-0.37, Test::Needs-0.002009, Test::NoWarnings-1.06, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302191, Test::Version-2.09, Test::Warn-0.37, 
Test::Warnings-0.031, Test::Without::Module-0.20, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, Text::CSV_XS-1.48, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::ParseWords-3.31, Text::Soundex-3.05, Text::Table-1.134, Text::Template-1.61, Thread::Queue-3.13, Throwable-1.000, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.30, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Types::Serialiser-1.01, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, URI-5.12, URI::Escape-5.12, Variable::Magic-0.62, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.73
"},{"location":"available_software/detail/Pillow-SIMD/","title":"Pillow-SIMD","text":"Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
https://github.com/uploadcare/pillow-simd
"},{"location":"available_software/detail/Pillow-SIMD/#available-modules","title":"Available modules","text":"The overview below shows which Pillow-SIMD installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Pillow-SIMD, load one of these modules using a module load
command like:
module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Pillow/","title":"Pillow","text":"Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.
https://pillow.readthedocs.org/
"},{"location":"available_software/detail/Pillow/#available-modules","title":"Available modules","text":"The overview below shows which Pillow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Pillow, load one of these modules using a module load
command like:
module load Pillow/10.2.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow/10.2.0-GCCcore-13.2.0 x x x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Pint/","title":"Pint","text":"Pint is a Python package to define, operate and manipulate physical quantities: the product of a numerical value and a unit of measurement. It allows arithmetic operations between them and conversions from and to different units.
https://github.com/hgrecco/pint
"},{"location":"available_software/detail/Pint/#available-modules","title":"Available modules","text":"The overview below shows which Pint installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Pint, load one of these modules using a module load
command like:
module load Pint/0.23-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pint/0.23-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PuLP/","title":"PuLP","text":"PuLP is an LP modeler written in Python. PuLP can generate MPS or LP files and call GLPK, COIN-OR CLP/CBC, CPLEX, GUROBI, MOSEK, XPRESS, CHOCO, MIPCL, SCIP to solve linear problems.
https://github.com/coin-or/pulp
"},{"location":"available_software/detail/PuLP/#available-modules","title":"Available modules","text":"The overview below shows which PuLP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PuLP, load one of these modules using a module load
command like:
module load PuLP/2.8.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PuLP/2.8.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/","title":"PyQt-builder","text":"PyQt-builder is the PEP 517 compliant build system for PyQt and projects that extend PyQt. It extends the SIP build system and uses Qt\u2019s qmake to perform the actual compilation and installation of extension modules.
http://www.example.com
"},{"location":"available_software/detail/PyQt-builder/#available-modules","title":"Available modules","text":"The overview below shows which PyQt-builder installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PyQt-builder, load one of these modules using a module load
command like:
module load PyQt-builder/1.15.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt-builder/1.15.4-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/#pyqt-builder1154-gcccore-1230","title":"PyQt-builder/1.15.4-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
PyQt-builder-1.15.4
"},{"location":"available_software/detail/PyQt5/","title":"PyQt5","text":"PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company\u2019s Qt WebEngine framework.
https://www.riverbankcomputing.com/software/pyqt
"},{"location":"available_software/detail/PyQt5/#available-modules","title":"Available modules","text":"The overview below shows which PyQt5 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PyQt5, load one of these modules using a module load
command like:
module load PyQt5/5.15.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt5/5.15.10-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyTorch/","title":"PyTorch","text":"Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.
https://pytorch.org/
"},{"location":"available_software/detail/PyTorch/#available-modules","title":"Available modules","text":"The overview below shows which PyTorch installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PyTorch, load one of these modules using a module load
command like:
module load PyTorch/2.1.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyTorch/2.1.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/PyYAML/","title":"PyYAML","text":"PyYAML is a YAML parser and emitter for the Python programming language.
https://github.com/yaml/pyyaml
"},{"location":"available_software/detail/PyYAML/#available-modules","title":"Available modules","text":"The overview below shows which PyYAML installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PyYAML, load one of these modules using a module load
command like:
module load PyYAML/6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyYAML/6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/PyZMQ/","title":"PyZMQ","text":"Python bindings for ZeroMQ
https://www.zeromq.org/bindings:python
"},{"location":"available_software/detail/PyZMQ/#available-modules","title":"Available modules","text":"The overview below shows which PyZMQ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using PyZMQ, load one of these modules using a module load
command like:
module load PyZMQ/25.1.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/","title":"Python-bundle-PyPI","text":"Bundle of Python packages from PyPI
https://python.org/
"},{"location":"available_software/detail/Python-bundle-PyPI/#available-modules","title":"Available modules","text":"The overview below shows which Python-bundle-PyPI installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Python-bundle-PyPI, load one of these modules using a module load
command like:
module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202310-gcccore-1320","title":"Python-bundle-PyPI/2023.10-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.13.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.6, bitarray-2.8.2, bitstring-4.1.2, blist-1.3.6, cachecontrol-0.13.1, cachy-0.3.0, certifi-2023.7.22, cffi-1.16.0, chardet-5.2.0, charset-normalizer-3.3.1, cleo-2.0.1, click-8.1.7, cloudpickle-3.0.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-3.0.4, decorator-5.1.1, distlib-0.3.7, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.6, ecdsa-0.18.0, editables-0.5, exceptiongroup-1.1.3, execnet-2.0.2, filelock-3.13.0, fsspec-2023.10.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.8.0, importlib_resources-6.1.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.3.0, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.3.2, jsonschema-4.17.3, keyring-24.2.0, keyrings.alt-5.0.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.1.0, more-itertools-10.1.0, msgpack-1.0.7, netaddr-0.9.0, netifaces-0.11.0, packaging-23.2, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.2, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, pluggy-1.3.0, pooch-1.8.0, psutil-5.9.6, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.19.0, pydevtool-0.3.0, Pygments-2.16.1, Pygments-2.16.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.1, pyrsistent-0.20.0, pytest-7.4.3, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3.post1, rapidfuzz-2.15.2, regex-2023.10.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.6.0, rich-click-1.7.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.4, simplegeneric-0.8.1, simplejson-3.19.2, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, sphinx-7.2.6, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib_applehelp-1.0.7, sphinxcontrib_devhelp-1.0.5, sphinxcontrib_htmlhelp-2.0.4, sphinxcontrib_qthelp-1.0.6, sphinxcontrib_serializinghtml-1.1.9, sphinxcontrib_websupport-1.2.6, tabulate-0.9.0, threadpoolctl-3.2.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.12.1, typing_extensions-4.8.0, ujson-5.8.0, urllib3-2.0.7, wcwidth-0.2.8, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.17.0
"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202306-gcccore-1230","title":"Python-bundle-PyPI/2023.06-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.12.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.5, bitstring-4.0.2, blist-1.3.6, CacheControl-0.12.14, cachy-0.3.0, certifi-2023.5.7, cffi-1.15.1, chardet-5.1.0, charset-normalizer-3.1.0, cleo-2.0.1, click-8.1.3, cloudpickle-2.2.1, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-0.29.35, decorator-5.1.1, distlib-0.3.6, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.5, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.1.1, execnet-1.9.0, filelock-3.12.2, fsspec-2023.6.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.7.0, importlib_resources-5.12.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.3, keyring-23.13.1, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.0.2, more-itertools-9.1.0, msgpack-1.0.5, netaddr-0.8.0, netifaces-0.11.0, packaging-23.1, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.1, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, pluggy-1.2.0, pooch-1.7.0, psutil-5.9.5, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.18.0, pydevtool-0.3.0, Pygments-2.15.1, Pygments-2.15.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.0, pyrsistent-0.19.3, pytest-7.4.0, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3, rapidfuzz-2.15.1, regex-2023.6.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.4.2, rich-click-1.6.1, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.0.post1, simplegeneric-0.8.1, simplejson-3.19.1, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-7.0.1, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.4, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.8, typing_extensions-4.6.3, ujson-5.8.0, urllib3-1.26.16, wcwidth-0.2.6, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.15.0
"},{"location":"available_software/detail/Python/","title":"Python","text":"Python is a programming language that lets you work more quickly and integrate your systems more effectively.
https://python.org/
"},{"location":"available_software/detail/Python/#available-modules","title":"Available modules","text":"The overview below shows which Python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Python, load one of these modules using a module load
command like:
module load Python/3.11.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python/3.11.5-GCCcore-13.2.0 x x x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x x x Python/3.10.8-GCCcore-12.2.0 x x x x x x x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x x x"},{"location":"available_software/detail/Python/#python3115-gcccore-1320","title":"Python/3.11.5-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
flit_core-3.9.0, pip-23.2.1, setuptools-68.2.2, wheel-0.41.2
"},{"location":"available_software/detail/Python/#python3113-gcccore-1230","title":"Python/3.11.3-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
flit_core-3.9.0, pip-23.1.2, setuptools-67.7.2, wheel-0.40.0
"},{"location":"available_software/detail/Python/#python3108-gcccore-1220","title":"Python/3.10.8-GCCcore-12.2.0","text":"This is a list of extensions included in the module:
alabaster-0.7.12, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-22.1.0, Babel-2.11.0, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.4, bcrypt-4.0.1, bitstring-3.1.9, blist-1.3.6, CacheControl-0.12.11, cachy-0.3.0, certifi-2022.9.24, cffi-1.15.1, chardet-5.0.0, charset-normalizer-2.1.1, cleo-1.0.0a5, click-8.1.3, clikit-0.6.2, cloudpickle-2.2.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.3.1, cryptography-38.0.3, Cython-0.29.32, decorator-5.1.1, distlib-0.3.6, docopt-0.6.2, docutils-0.19, doit-0.36.0, dulwich-0.20.50, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.0.1, execnet-1.9.0, filelock-3.8.0, flit-3.8.0, flit_core-3.8.0, flit_scm-1.7.0, fsspec-2022.11.0, future-0.18.2, glob2-0.7, hatch_fancy_pypi_readme-22.8.0, hatch_vcs-0.2.0, hatchling-1.11.1, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-5.0.0, importlib_resources-5.10.0, iniconfig-1.1.1, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.0, keyring-23.11.0, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, MarkupSafe-2.1.1, mock-4.0.3, more-itertools-9.0.0, msgpack-1.0.4, netaddr-0.8.0, netifaces-0.11.0, packaging-21.3, paramiko-2.12.0, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.10.1, pbr-5.11.0, pexpect-4.8.0, pip-22.3.1, pkginfo-1.8.3, platformdirs-2.5.3, pluggy-1.0.0, poetry-1.2.2, poetry-core-1.3.2, poetry_plugin_export-1.2.0, pooch-1.6.0, psutil-5.9.4, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.4.8, pycparser-2.21, pycryptodome-3.17, pydevtool-0.3.0, Pygments-2.13.0, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.0.9, pyrsistent-0.19.2, pytest-7.2.0, pytest-xdist-3.1.0, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2022.6, regex-2022.10.31, requests-2.28.1, requests-toolbelt-0.9.1, rich-13.1.0, rich-click-1.6.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, setuptools-63.4.3, setuptools-rust-1.5.2, setuptools_scm-7.0.5, shellingham-1.5.0, simplegeneric-0.8.1, simplejson-3.17.6, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-5.3.0, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.2, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.0, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.6, typing_extensions-4.4.0, ujson-5.5.0, urllib3-1.26.12, virtualenv-20.16.6, wcwidth-0.2.5, webencodings-0.5.1, wheel-0.38.4, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.10.0
"},{"location":"available_software/detail/Qhull/","title":"Qhull","text":"Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull.
http://www.qhull.org
"},{"location":"available_software/detail/Qhull/#available-modules","title":"Available modules","text":"The overview below shows which Qhull installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Qhull, load one of these modules using a module load
command like:
module load Qhull/2020.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qhull/2020.2-GCCcore-13.2.0 x x x x x x x x Qhull/2020.2-GCCcore-12.3.0 x x x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Qt5/","title":"Qt5","text":"Qt is a comprehensive cross-platform C++ application framework.
https://qt.io/
"},{"location":"available_software/detail/Qt5/#available-modules","title":"Available modules","text":"The overview below shows which Qt5 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Qt5, load one of these modules using a module load
command like:
module load Qt5/5.15.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qt5/5.15.10-GCCcore-12.3.0 x x x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/QuantumESPRESSO/","title":"QuantumESPRESSO","text":"Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).
https://www.quantum-espresso.org
"},{"location":"available_software/detail/QuantumESPRESSO/#available-modules","title":"Available modules","text":"The overview below shows which QuantumESPRESSO installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using QuantumESPRESSO, load one of these modules using a module load
command like:
module load QuantumESPRESSO/7.2-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 QuantumESPRESSO/7.2-foss-2022b x x x x x x x x"},{"location":"available_software/detail/R/","title":"R","text":"R is a free software environment for statistical computing and graphics.
https://www.r-project.org/
"},{"location":"available_software/detail/R/#available-modules","title":"Available modules","text":"The overview below shows which R installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using R, load one of these modules using a module load
command like:
module load R/4.3.2-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 R/4.3.2-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/R/#r432-gfbf-2023a","title":"R/4.3.2-gfbf-2023a","text":"This is a list of extensions included in the module:
askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-8, brio-1.1.3, bslib-0.5.1, cachem-1.0.8, callr-3.7.3, cli-3.6.1, clipr-0.8.0, commonmark-1.9.0, compiler, cpp11-0.4.6, crayon-1.5.2, credentials-2.0.1, curl-5.1.0, datasets, desc-1.4.2, devtools-2.4.5, diffobj-0.3.5, digest-0.6.33, downlit-0.4.3, ellipsis-0.3.2, evaluate-0.23, fansi-1.0.5, fastmap-1.1.1, fontawesome-0.5.2, fs-1.6.3, gert-2.0.0, gh-1.4.0, gitcreds-0.1.2, glue-1.6.2, graphics, grDevices, grid, highr-0.10, htmltools-0.5.7, htmlwidgets-1.6.2, httpuv-1.6.12, httr-1.4.7, httr2-0.2.3, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.7, knitr-1.45, later-1.3.1, lifecycle-1.0.3, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.1.1, parallel, pillar-1.9.0, pkgbuild-1.4.2, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.3, praise-1.0.0, prettyunits-1.2.0, processx-3.8.2, profvis-0.3.8, promises-1.2.1, ps-1.7.5, purrr-1.0.2, R6-2.5.1, ragg-1.2.6, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.11, rematch2-2.1.2, remotes-2.4.2.1, rlang-1.1.2, rmarkdown-2.25, roxygen2-7.2.3, rprojroot-2.0.4, rstudioapi-0.15.0, rversions-2.1.2, sass-0.4.7, sessioninfo-1.2.2, shiny-1.7.5.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.7.12, stringr-1.5.0, sys-3.4.2, systemfonts-1.0.5, tcltk, testthat-3.2.0, textshaping-0.3.7, tibble-3.2.1, tinytex-0.48, tools, urlchecker-1.0.1, usethis-2.2.2, utf8-1.2.4, utils, vctrs-0.6.4, waldo-0.5.2, whisker-0.4.1, withr-2.5.2, xfun-0.41, xml2-1.3.5, xopen-1.0.0, xtable-1.8-4, yaml-2.3.7, zip-2.3.0
"},{"location":"available_software/detail/RE2/","title":"RE2","text":"RE2 is a fast, safe, thread-friendly alternative to backtracking regularexpression engines like those used in PCRE, Perl, and Python. It is a C++library.
https://github.com/google/re2
"},{"location":"available_software/detail/RE2/#available-modules","title":"Available modules","text":"The overview below shows which RE2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using RE2, load one of these modules using a module load
command like:
module load RE2/2023-08-01-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 RE2/2023-08-01-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/ReFrame/","title":"ReFrame","text":"ReFrame is a framework for writing regression tests for HPC systems.
https://github.com/reframe-hpc/reframe
"},{"location":"available_software/detail/ReFrame/#available-modules","title":"Available modules","text":"The overview below shows which ReFrame installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ReFrame, load one of these modules using a module load
command like:
module load ReFrame/4.3.3\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ReFrame/4.3.3 x x x x x x x x"},{"location":"available_software/detail/ReFrame/#reframe433","title":"ReFrame/4.3.3","text":"This is a list of extensions included in the module:
pip-21.3.1, reframe-4.3.3, wheel-0.37.1
"},{"location":"available_software/detail/Rivet/","title":"Rivet","text":"Rivet toolkit (Robust Independent Validation of Experiment and Theory). To use your own analysis you must append the path to RIVET_ANALYSIS_PATH.
https://gitlab.com/hepcedar/rivet
"},{"location":"available_software/detail/Rivet/#available-modules","title":"Available modules","text":"The overview below shows which Rivet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Rivet, load one of these modules using a module load
command like:
module load Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6 x x x x x x x x"},{"location":"available_software/detail/Rust/","title":"Rust","text":"Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
https://www.rust-lang.org
"},{"location":"available_software/detail/Rust/#available-modules","title":"Available modules","text":"The overview below shows which Rust installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Rust, load one of these modules using a module load
command like:
module load Rust/1.73.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rust/1.73.0-GCCcore-13.2.0 x x x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/SCOTCH/","title":"SCOTCH","text":"Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.
https://www.labri.fr/perso/pelegrin/scotch/
"},{"location":"available_software/detail/SCOTCH/#available-modules","title":"Available modules","text":"The overview below shows which SCOTCH installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using SCOTCH, load one of these modules using a module load
command like:
module load SCOTCH/7.0.3-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SCOTCH/7.0.3-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/SDL2/","title":"SDL2","text":"SDL: Simple DirectMedia Layer, a cross-platform multimedia library
https://www.libsdl.org/
"},{"location":"available_software/detail/SDL2/#available-modules","title":"Available modules","text":"The overview below shows which SDL2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using SDL2, load one of these modules using a module load
command like:
module load SDL2/2.28.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SDL2/2.28.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/SIP/","title":"SIP","text":"SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries.
http://www.riverbankcomputing.com/software/sip/
"},{"location":"available_software/detail/SIP/#available-modules","title":"Available modules","text":"The overview below shows which SIP installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using SIP, load one of these modules using a module load
command like:
module load SIP/6.8.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIP/6.8.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/SQLite/","title":"SQLite","text":"SQLite: SQL Database Engine in a C Library
https://www.sqlite.org/
"},{"location":"available_software/detail/SQLite/#available-modules","title":"Available modules","text":"The overview below shows which SQLite installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using SQLite, load one of these modules using a module load
command like:
module load SQLite/3.43.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLite/3.43.1-GCCcore-13.2.0 x x x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ScaFaCoS/","title":"ScaFaCoS","text":"ScaFaCoS is a library of scalable fast coulomb solvers.
http://www.scafacos.de/
"},{"location":"available_software/detail/ScaFaCoS/#available-modules","title":"Available modules","text":"The overview below shows which ScaFaCoS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ScaFaCoS, load one of these modules using a module load
command like:
module load ScaFaCoS/1.0.4-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaFaCoS/1.0.4-foss-2023a - - - x x x x x"},{"location":"available_software/detail/ScaLAPACK/","title":"ScaLAPACK","text":"The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.
https://www.netlib.org/scalapack/
"},{"location":"available_software/detail/ScaLAPACK/#available-modules","title":"Available modules","text":"The overview below shows which ScaLAPACK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ScaLAPACK, load one of these modules using a module load
command like:
module load ScaLAPACK/2.2.0-gompi-2023b-fb\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x x x"},{"location":"available_software/detail/SciPy-bundle/","title":"SciPy-bundle","text":"Bundle of Python packages for scientific software
https://python.org/
"},{"location":"available_software/detail/SciPy-bundle/#available-modules","title":"Available modules","text":"The overview below shows which SciPy-bundle installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using SciPy-bundle, load one of these modules using a module load
command like:
module load SciPy-bundle/2023.11-gfbf-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciPy-bundle/2023.11-gfbf-2023b x x x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x x x"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202311-gfbf-2023b","title":"SciPy-bundle/2023.11-gfbf-2023b","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.1, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.7, numpy-1.26.2, pandas-2.1.3, ply-3.11, pythran-0.14.0, scipy-1.11.4, tzdata-2023.3, versioneer-0.29
"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202307-gfbf-2023a","title":"SciPy-bundle/2023.07-gfbf-2023a","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.0, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.4, numpy-1.25.1, pandas-2.0.3, ply-3.11, pythran-0.13.1, scipy-1.11.1, tzdata-2023.3, versioneer-0.29
"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202302-gfbf-2022b","title":"SciPy-bundle/2023.02-gfbf-2022b","text":"This is a list of extensions included in the module:
beniget-0.4.1, Bottleneck-1.3.5, deap-1.3.3, gast-0.5.3, mpmath-1.2.1, numexpr-2.8.4, numpy-1.24.2, pandas-1.5.3, ply-3.11, pythran-0.12.1, scipy-1.10.1
"},{"location":"available_software/detail/Szip/","title":"Szip","text":"Szip compression software, providing lossless compression of scientific data
https://www.hdfgroup.org/doc_resource/SZIP/
"},{"location":"available_software/detail/Szip/#available-modules","title":"Available modules","text":"The overview below shows which Szip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Szip, load one of these modules using a module load
command like:
module load Szip/2.1.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Szip/2.1.1-GCCcore-13.2.0 x x x x x x x x Szip/2.1.1-GCCcore-12.3.0 x x x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Tcl/","title":"Tcl","text":"Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.
https://www.tcl.tk/
"},{"location":"available_software/detail/Tcl/#available-modules","title":"Available modules","text":"The overview below shows which Tcl installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Tcl, load one of these modules using a module load
command like:
module load Tcl/8.6.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tcl/8.6.13-GCCcore-13.2.0 x x x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/TensorFlow/","title":"TensorFlow","text":"An open-source software library for Machine Intelligence
https://www.tensorflow.org/
"},{"location":"available_software/detail/TensorFlow/#available-modules","title":"Available modules","text":"The overview below shows which TensorFlow installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using TensorFlow, load one of these modules using a module load
command like:
module load TensorFlow/2.13.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 TensorFlow/2.13.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/TensorFlow/#tensorflow2130-foss-2023a","title":"TensorFlow/2.13.0-foss-2023a","text":"This is a list of extensions included in the module:
absl-py-1.4.0, astor-0.8.1, astunparse-1.6.3, cachetools-5.3.1, google-auth-2.22.0, google-auth-oauthlib-1.0.0, google-pasta-0.2.0, grpcio-1.57.0, gviz-api-1.10.0, keras-2.13.1, Markdown-3.4.4, oauthlib-3.2.2, opt-einsum-3.3.0, portpicker-1.5.2, pyasn1-modules-0.3.0, requests-oauthlib-1.3.1, rsa-4.9, tblib-2.0.0, tensorboard-2.13.0, tensorboard-data-server-0.7.1, tensorboard-plugin-profile-2.13.1, tensorboard-plugin-wit-1.8.1, TensorFlow-2.13.0, tensorflow-estimator-2.13.0, termcolor-2.3.0, Werkzeug-2.3.7, wrapt-1.15.0
"},{"location":"available_software/detail/Tk/","title":"Tk","text":"Tk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.
https://www.tcl.tk/
"},{"location":"available_software/detail/Tk/#available-modules","title":"Available modules","text":"The overview below shows which Tk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Tk, load one of these modules using a module load
command like:
module load Tk/8.6.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tk/8.6.13-GCCcore-13.2.0 x x x x x x x x Tk/8.6.13-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Tkinter/","title":"Tkinter","text":"Tkinter module, built with the Python build system
https://python.org/
"},{"location":"available_software/detail/Tkinter/#available-modules","title":"Available modules","text":"The overview below shows which Tkinter installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Tkinter, load one of these modules using a module load
command like:
module load Tkinter/3.11.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tkinter/3.11.5-GCCcore-13.2.0 x x x x x x x x Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/UCC-CUDA/","title":"UCC-CUDA","text":"UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes. This module adds the UCC CUDA support.
https://www.openucx.org/
"},{"location":"available_software/detail/UCC-CUDA/#available-modules","title":"Available modules","text":"The overview below shows which UCC-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UCC-CUDA, load one of these modules using a module load
command like:
module load UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/UCC/","title":"UCC","text":"UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes.
https://www.openucx.org/
"},{"location":"available_software/detail/UCC/#available-modules","title":"Available modules","text":"The overview below shows which UCC installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UCC, load one of these modules using a module load
command like:
module load UCC/1.2.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC/1.2.0-GCCcore-13.2.0 x x x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/UCX-CUDA/","title":"UCX-CUDA","text":"Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications. This module adds the UCX CUDA support.
http://www.openucx.org/
"},{"location":"available_software/detail/UCX-CUDA/#available-modules","title":"Available modules","text":"The overview below shows which UCX-CUDA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UCX-CUDA, load one of these modules using a module load
command like:
module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x x x"},{"location":"available_software/detail/UCX/","title":"UCX","text":"Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications
https://www.openucx.org/
"},{"location":"available_software/detail/UCX/#available-modules","title":"Available modules","text":"The overview below shows which UCX installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UCX, load one of these modules using a module load
command like:
module load UCX/1.15.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX/1.15.0-GCCcore-13.2.0 x x x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/UDUNITS/","title":"UDUNITS","text":"UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.
https://www.unidata.ucar.edu/software/udunits/
"},{"location":"available_software/detail/UDUNITS/#available-modules","title":"Available modules","text":"The overview below shows which UDUNITS installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UDUNITS, load one of these modules using a module load
command like:
module load UDUNITS/2.2.28-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UDUNITS/2.2.28-GCCcore-13.2.0 x x x x x x x x UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/UnZip/","title":"UnZip","text":"UnZip is an extraction utility for archives compressed in .zip format (also called \"zipfiles\"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.
http://www.info-zip.org/UnZip.html
"},{"location":"available_software/detail/UnZip/#available-modules","title":"Available modules","text":"The overview below shows which UnZip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using UnZip, load one of these modules using a module load
command like:
module load UnZip/6.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 UnZip/6.0-GCCcore-13.2.0 x x x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/VTK/","title":"VTK","text":"The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.
https://www.vtk.org
"},{"location":"available_software/detail/VTK/#available-modules","title":"Available modules","text":"The overview below shows which VTK installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using VTK, load one of these modules using a module load
command like:
module load VTK/9.3.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 VTK/9.3.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/Voro%2B%2B/","title":"Voro++","text":"Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (e.g. volume, centroid, number of faces) can be used to analyze a system of particles.
http://math.lbl.gov/voro++/
"},{"location":"available_software/detail/Voro%2B%2B/#available-modules","title":"Available modules","text":"The overview below shows which Voro++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Voro++, load one of these modules using a module load
command like:
module load Voro++/0.4.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Voro++/0.4.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/WCSLIB/","title":"WCSLIB","text":"The FITS \"World Coordinate System\" (WCS) standard defines keywords and usage that provide for the description of astronomical coordinate systems in a FITS image header.
https://www.atnf.csiro.au/people/mcalabre/WCS/
"},{"location":"available_software/detail/WCSLIB/#available-modules","title":"Available modules","text":"The overview below shows which WCSLIB installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using WCSLIB, load one of these modules using a module load
command like:
module load WCSLIB/7.11-GCC-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WCSLIB/7.11-GCC-13.2.0 x x x x x x x x"},{"location":"available_software/detail/WRF/","title":"WRF","text":"The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.
https://www.wrf-model.org
"},{"location":"available_software/detail/WRF/#available-modules","title":"Available modules","text":"The overview below shows which WRF installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using WRF, load one of these modules using a module load
command like:
module load WRF/4.4.1-foss-2022b-dmpar\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WRF/4.4.1-foss-2022b-dmpar x x x x x x x x"},{"location":"available_software/detail/WSClean/","title":"WSClean","text":"WSClean (w-stacking clean) is a fast generic widefield imager. It implements several gridding algorithms and offers fully-automated multi-scale multi-frequency deconvolution.
https://wsclean.readthedocs.io/
"},{"location":"available_software/detail/WSClean/#available-modules","title":"Available modules","text":"The overview below shows which WSClean installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using WSClean, load one of these modules using a module load
command like:
module load WSClean/3.4-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 WSClean/3.4-foss-2023b x x x x x x x x"},{"location":"available_software/detail/Wayland/","title":"Wayland","text":"Wayland is a project to define a protocol for a compositor to talk to its clients as well as a library implementation of the protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.
https://wayland.freedesktop.org/
"},{"location":"available_software/detail/Wayland/#available-modules","title":"Available modules","text":"The overview below shows which Wayland installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Wayland, load one of these modules using a module load
command like:
module load Wayland/1.22.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Wayland/1.22.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/X11/","title":"X11","text":"The X Window System (X11) is a windowing system for bitmap displays
https://www.x.org
"},{"location":"available_software/detail/X11/#available-modules","title":"Available modules","text":"The overview below shows which X11 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using X11, load one of these modules using a module load
command like:
module load X11/20231019-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 X11/20231019-GCCcore-13.2.0 x x x x x x x x X11/20230603-GCCcore-12.3.0 x x x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/Xerces-C%2B%2B/","title":"Xerces-C++","text":"Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.
https://xerces.apache.org/xerces-c/
"},{"location":"available_software/detail/Xerces-C%2B%2B/#available-modules","title":"Available modules","text":"The overview below shows which Xerces-C++ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Xerces-C++, load one of these modules using a module load
command like:
module load Xerces-C++/3.2.4-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/YODA/","title":"YODA","text":"Yet more Objects for (High Energy Physics) Data Analysis
https://yoda.hepforge.org/
"},{"location":"available_software/detail/YODA/#available-modules","title":"Available modules","text":"The overview below shows which YODA installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using YODA, load one of these modules using a module load
command like:
module load YODA/1.9.9-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 YODA/1.9.9-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Yasm/","title":"Yasm","text":"Yasm: Complete rewrite of the NASM assembler with BSD license
https://www.tortall.net/projects/yasm/
"},{"location":"available_software/detail/Yasm/#available-modules","title":"Available modules","text":"The overview below shows which Yasm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Yasm, load one of these modules using a module load
command like:
module load Yasm/1.3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Yasm/1.3.0-GCCcore-12.3.0 - - - x x x x x"},{"location":"available_software/detail/Z3/","title":"Z3","text":"Z3 is a theorem prover from Microsoft Research with support for bitvectors, booleans, arrays, floating point numbers, strings, and other data types. This module includes z3-solver, the Python interface of Z3.
https://github.com/Z3Prover/z3
"},{"location":"available_software/detail/Z3/#available-modules","title":"Available modules","text":"The overview below shows which Z3 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Z3, load one of these modules using a module load
command like:
module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x x x"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230-python-3113","title":"Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3","text":"This is a list of extensions included in the module:
z3-solver-4.12.2.0
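Since the module ships the z3-solver Python bindings listed above, a minimal sketch of using them (assuming the Z3 module is loaded) could look like:

    from z3 import Int, Solver, sat

    x, y = Int("x"), Int("y")
    s = Solver()
    s.add(x + y == 10, x > y, y > 0)   # a small set of integer constraints
    if s.check() == sat:
        print(s.model())               # prints one satisfying assignment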
"},{"location":"available_software/detail/ZeroMQ/","title":"ZeroMQ","text":"ZeroMQ looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems.
https://www.zeromq.org/
"},{"location":"available_software/detail/ZeroMQ/#available-modules","title":"Available modules","text":"The overview below shows which ZeroMQ installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ZeroMQ, load one of these modules using a module load
command like:
module load ZeroMQ/4.3.4-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/Zip/","title":"Zip","text":"Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality
http://www.info-zip.org/Zip.html
"},{"location":"available_software/detail/Zip/#available-modules","title":"Available modules","text":"The overview below shows which Zip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using Zip, load one of these modules using a module load
command like:
module load Zip/3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 Zip/3.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/archspec/","title":"archspec","text":"A library for detecting, labeling, and reasoning about microarchitectures
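As a quick, hedged illustration of what archspec offers (assuming the archspec module below is loaded so the Python package is importable), host CPU detection could look like this minimal sketch:

    import archspec.cpu

    host = archspec.cpu.host()   # detect the microarchitecture this process runs on
    print(host.name)             # e.g. 'zen3' or 'neoverse_v1'
    print(host.family)           # e.g. 'x86_64' or 'aarch64'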
https://github.com/archspec/archspec
"},{"location":"available_software/detail/archspec/#available-modules","title":"Available modules","text":"The overview below shows which archspec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using archspec, load one of these modules using a module load
command like:
module load archspec/0.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 archspec/0.2.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/arpack-ng/","title":"arpack-ng","text":"ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.
https://github.com/opencollab/arpack-ng
"},{"location":"available_software/detail/arpack-ng/#available-modules","title":"Available modules","text":"The overview below shows which arpack-ng installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using arpack-ng, load one of these modules using a module load
command like:
module load arpack-ng/3.9.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 arpack-ng/3.9.0-foss-2023b x x x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x x x"},{"location":"available_software/detail/at-spi2-atk/","title":"at-spi2-atk","text":"AT-SPI 2 toolkit bridge
https://wiki.gnome.org/Accessibility
"},{"location":"available_software/detail/at-spi2-atk/#available-modules","title":"Available modules","text":"The overview below shows which at-spi2-atk installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using at-spi2-atk, load one of these modules using a module load
command like:
module load at-spi2-atk/2.38.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/at-spi2-core/","title":"at-spi2-core","text":"Assistive Technology Service Provider Interface.
https://wiki.gnome.org/Accessibility
"},{"location":"available_software/detail/at-spi2-core/#available-modules","title":"Available modules","text":"The overview below shows which at-spi2-core installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using at-spi2-core, load one of these modules using a module load
command like:
module load at-spi2-core/2.49.91-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-core/2.49.91-GCCcore-12.3.0 x x x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/bokeh/","title":"bokeh","text":"Statistical and novel interactive HTML plots for Python
https://github.com/bokeh/bokeh
"},{"location":"available_software/detail/bokeh/#available-modules","title":"Available modules","text":"The overview below shows which bokeh installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using bokeh, load one of these modules using a module load
command like:
module load bokeh/3.2.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 bokeh/3.2.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/bokeh/#bokeh322-foss-2023a","title":"bokeh/3.2.2-foss-2023a","text":"This is a list of extensions included in the module:
bokeh-3.2.2, contourpy-1.0.7, xyzservices-2023.7.0
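To give a feel for the package, a minimal hedged sketch (assuming the bokeh module above is loaded) that writes a standalone HTML plot could be:

    from bokeh.plotting import figure, output_file, save

    p = figure(title="example", x_axis_label="x", y_axis_label="y")
    p.line([1, 2, 3, 4], [4, 7, 2, 5], line_width=2)
    output_file("example.html")  # hypothetical output file name
    save(p)                      # writes the interactive plot as standalone HTML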
"},{"location":"available_software/detail/cURL/","title":"cURL","text":"libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.
https://curl.haxx.se
"},{"location":"available_software/detail/cURL/#available-modules","title":"Available modules","text":"The overview below shows which cURL installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using cURL, load one of these modules using a module load
command like:
module load cURL/8.3.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cURL/8.3.0-GCCcore-13.2.0 x x x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/cairo/","title":"cairo","text":"Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB
https://cairographics.org
"},{"location":"available_software/detail/cairo/#available-modules","title":"Available modules","text":"The overview below shows which cairo installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using cairo, load one of these modules using a module load
command like:
module load cairo/1.17.8-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cairo/1.17.8-GCCcore-12.3.0 x x x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/casacore/","title":"casacore","text":"A suite of C++ libraries for radio astronomy data processing. The ephemerides data needs to be in DATA_DIR and the location must be specified at runtime. Thus users can update them.
https://github.com/casacore/casacore
"},{"location":"available_software/detail/casacore/#available-modules","title":"Available modules","text":"The overview below shows which casacore installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using casacore, load one of these modules using a module load
command like:
module load casacore/3.5.0-foss-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 casacore/3.5.0-foss-2023b x x x x x x x x"},{"location":"available_software/detail/cffi/","title":"cffi","text":"C Foreign Function Interface for Python. Interact with almost any C code from Python, based on C-like declarations that you can often copy-paste from header files or documentation.
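As a hedged sketch of the ABI-level workflow (assuming the cffi module below is loaded), calling into the standard C library directly could look like:

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("int printf(const char *format, ...);")  # declaration copied from a C header
    C = ffi.dlopen(None)                               # load the standard C library
    arg = ffi.new("char[]", b"world")
    C.printf(b"hi there, %s.\n", arg)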
https://cffi.readthedocs.io/en/latest/
"},{"location":"available_software/detail/cffi/#available-modules","title":"Available modules","text":"The overview below shows which cffi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using cffi, load one of these modules using a module load
command like:
module load cffi/1.15.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cffi/1.15.1-GCCcore-13.2.0 x x x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1320","title":"cffi/1.15.1-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
cffi-1.15.1, pycparser-2.21
"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1230","title":"cffi/1.15.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
cffi-1.15.1, pycparser-2.21
"},{"location":"available_software/detail/cppy/","title":"cppy","text":"A small C++ header library which makes it easier to write Python extension modules. The primary feature is a PyObject smart pointer which automatically handles reference counting and provides convenience methods for performing common object operations.
https://github.com/nucleic/cppy
"},{"location":"available_software/detail/cppy/#available-modules","title":"Available modules","text":"The overview below shows which cppy installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using cppy, load one of these modules using a module load
command like:
module load cppy/1.2.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cppy/1.2.1-GCCcore-13.2.0 x x x x x x x x cppy/1.2.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/cryptography/","title":"cryptography","text":"cryptography is a package designed to expose cryptographic primitives and recipes to Python developers.
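As a small hedged example of the high-level recipes layer (assuming the cryptography module below is loaded), symmetric encryption with Fernet could look like:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # fresh symmetric key
    f = Fernet(key)
    token = f.encrypt(b"secret message")
    print(f.decrypt(token))              # b'secret message'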
https://github.com/pyca/cryptography
"},{"location":"available_software/detail/cryptography/#available-modules","title":"Available modules","text":"The overview below shows which cryptography installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using cryptography, load one of these modules using a module load
command like:
module load cryptography/41.0.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 cryptography/41.0.5-GCCcore-13.2.0 x x x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/dask/","title":"dask","text":"Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
https://dask.org/
"},{"location":"available_software/detail/dask/#available-modules","title":"Available modules","text":"The overview below shows which dask installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using dask, load one of these modules using a module load
command like:
module load dask/2023.9.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 dask/2023.9.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/dask/#dask202392-foss-2023a","title":"dask/2023.9.2-foss-2023a","text":"This is a list of extensions included in the module:
dask-2023.9.2, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.9.2, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, zict-3.0.0
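As a minimal hedged sketch of the parallelism dask provides (assuming the dask module above is loaded), a chunked, lazily evaluated array computation could look like:

    import dask.array as da

    x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))  # lazy, chunked array
    result = (x + x.T).mean(axis=0)                                # still lazy
    print(result.compute())                                        # runs the task graph in parallel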
"},{"location":"available_software/detail/dill/","title":"dill","text":"dill extends python's pickle module for serializing and de-serializing python objects to the majority of the built-in python types. Serialization is the process of converting an object to a byte stream, and the inverse of which is converting a byte stream back to on python object hierarchy.
https://pypi.org/project/dill/
"},{"location":"available_software/detail/dill/#available-modules","title":"Available modules","text":"The overview below shows which dill installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using dill, load one of these modules using a module load
command like:
module load dill/0.3.7-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 dill/0.3.7-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/double-conversion/","title":"double-conversion","text":"Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles.
https://github.com/google/double-conversion
"},{"location":"available_software/detail/double-conversion/#available-modules","title":"Available modules","text":"The overview below shows which double-conversion installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using double-conversion, load one of these modules using a module load
command like:
module load double-conversion/3.3.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/ecBuild/","title":"ecBuild","text":"A CMake-based build system, consisting of a collection of CMake macros and functions that ease the managing of software build systems
https://ecbuild.readthedocs.io/
"},{"location":"available_software/detail/ecBuild/#available-modules","title":"Available modules","text":"The overview below shows which ecBuild installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ecBuild, load one of these modules using a module load
command like:
module load ecBuild/3.8.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecBuild/3.8.0 x x x x x x x x"},{"location":"available_software/detail/ecCodes/","title":"ecCodes","text":"ecCodes is a package developed by ECMWF which provides an application programming interface and a set of tools for decoding and encoding messages in the following formats: WMO FM-92 GRIB edition 1 and edition 2, WMO FM-94 BUFR edition 3 and edition 4, WMO GTS abbreviated header (only decoding).
https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home
"},{"location":"available_software/detail/ecCodes/#available-modules","title":"Available modules","text":"The overview below shows which ecCodes installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ecCodes, load one of these modules using a module load
command like:
module load ecCodes/2.31.0-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecCodes/2.31.0-gompi-2023b x x x x x x x x ecCodes/2.31.0-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/expat/","title":"expat","text":"Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags).
https://libexpat.github.io
"},{"location":"available_software/detail/expat/#available-modules","title":"Available modules","text":"The overview below shows which expat installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using expat, load one of these modules using a module load
command like:
module load expat/2.5.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 expat/2.5.0-GCCcore-13.2.0 x x x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/expecttest/","title":"expecttest","text":"This library implements expect tests (also known as \"golden\" tests). Expect tests are a method of writing tests where instead of hard-coding the expected output of a test, you run the test to get the output, and the test framework automatically populates the expected output. If the output of the test changes, you can rerun the test with the environment variable EXPECTTEST_ACCEPT=1 to automatically update the expected output.
https://github.com/ezyang/expecttest
"},{"location":"available_software/detail/expecttest/#available-modules","title":"Available modules","text":"The overview below shows which expecttest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using expecttest, load one of these modules using a module load
command like:
module load expecttest/0.1.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 expecttest/0.1.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/fastjet-contrib/","title":"fastjet-contrib","text":"3rd party extensions of FastJet
https://fastjet.hepforge.org/contrib/
"},{"location":"available_software/detail/fastjet-contrib/#available-modules","title":"Available modules","text":"The overview below shows which fastjet-contrib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using fastjet-contrib, load one of these modules using a module load
command like:
module load fastjet-contrib/1.053-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet-contrib/1.053-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/fastjet/","title":"fastjet","text":"A software package for jet finding in pp and e+e- collisions
https://fastjet.fr/
"},{"location":"available_software/detail/fastjet/#available-modules","title":"Available modules","text":"The overview below shows which fastjet installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using fastjet, load one of these modules using a module load
command like:
module load fastjet/3.4.2-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet/3.4.2-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/ffnvcodec/","title":"ffnvcodec","text":"FFmpeg nvidia headers. Adds support for nvenc and nvdec. Requires Nvidia GPU and drivers to be present (picked up dynamically).
https://git.videolan.org/?p=ffmpeg/nv-codec-headers.git
"},{"location":"available_software/detail/ffnvcodec/#available-modules","title":"Available modules","text":"The overview below shows which ffnvcodec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using ffnvcodec, load one of these modules using a module load
command like:
module load ffnvcodec/12.0.16.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 ffnvcodec/12.0.16.0 x x x x x x x x"},{"location":"available_software/detail/flatbuffers-python/","title":"flatbuffers-python","text":"Python Flatbuffers runtime library.
https://github.com/google/flatbuffers/
"},{"location":"available_software/detail/flatbuffers-python/#available-modules","title":"Available modules","text":"The overview below shows which flatbuffers-python installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using flatbuffers-python, load one of these modules using a module load
command like:
module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flatbuffers/","title":"flatbuffers","text":"FlatBuffers: Memory Efficient Serialization Library
https://github.com/google/flatbuffers/
"},{"location":"available_software/detail/flatbuffers/#available-modules","title":"Available modules","text":"The overview below shows which flatbuffers installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using flatbuffers, load one of these modules using a module load
command like:
module load flatbuffers/23.5.26-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flit/","title":"flit","text":"A simple packaging tool for simple packages.
https://github.com/pypa/flit
"},{"location":"available_software/detail/flit/#available-modules","title":"Available modules","text":"The overview below shows which flit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using flit, load one of these modules using a module load
command like:
module load flit/3.9.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 flit/3.9.0-GCCcore-13.2.0 x x x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/flit/#flit390-gcccore-1320","title":"flit/3.9.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
certifi-2023.7.22, charset-normalizer-3.3.1, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.2, requests-2.31.0, setuptools-scm-8.0.4, tomli_w-1.0.0, typing_extensions-4.8.0, urllib3-2.0.7
"},{"location":"available_software/detail/flit/#flit390-gcccore-1230","title":"flit/3.9.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
certifi-2023.5.7, charset-normalizer-3.1.0, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.1, requests-2.31.0, setuptools_scm-7.1.0, tomli_w-1.0.0, typing_extensions-4.6.3, urllib3-1.26.16
"},{"location":"available_software/detail/fontconfig/","title":"fontconfig","text":"Fontconfig is a library designed to provide system-wide font configuration, customization and application access.
https://www.freedesktop.org/wiki/Software/fontconfig/
"},{"location":"available_software/detail/fontconfig/#available-modules","title":"Available modules","text":"The overview below shows which fontconfig installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using fontconfig, load one of these modules using a module load
command like:
module load fontconfig/2.14.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 fontconfig/2.14.2-GCCcore-13.2.0 x x x x x x x x fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/foss/","title":"foss","text":"GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.
https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain
"},{"location":"available_software/detail/foss/#available-modules","title":"Available modules","text":"The overview below shows which foss installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using foss, load one of these modules using a module load
command like:
module load foss/2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 foss/2023b x x x x x x x x foss/2023a x x x x x x x x foss/2022b x x x x x x x x"},{"location":"available_software/detail/freetype/","title":"freetype","text":"FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.
https://www.freetype.org
"},{"location":"available_software/detail/freetype/#available-modules","title":"Available modules","text":"The overview below shows which freetype installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using freetype, load one of these modules using a module load
command like:
module load freetype/2.13.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 freetype/2.13.2-GCCcore-13.2.0 x x x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/gfbf/","title":"gfbf","text":"GNU Compiler Collection (GCC) based compiler toolchain, including FlexiBLAS (BLAS and LAPACK support) and (serial) FFTW.
(none)
"},{"location":"available_software/detail/gfbf/#available-modules","title":"Available modules","text":"The overview below shows which gfbf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using gfbf, load one of these modules using a module load
command like:
module load gfbf/2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gfbf/2023b x x x x x x x x gfbf/2023a x x x x x x x x gfbf/2022b x x x x x x x x"},{"location":"available_software/detail/giflib/","title":"giflib","text":"giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif which was in wide use while the LZW compression algorithm was patented.
http://giflib.sourceforge.net/
"},{"location":"available_software/detail/giflib/#available-modules","title":"Available modules","text":"The overview below shows which giflib installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using giflib, load one of these modules using a module load
command like:
module load giflib/5.2.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 giflib/5.2.1-GCCcore-12.3.0 x x x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/git/","title":"git","text":"Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
https://git-scm.com
"},{"location":"available_software/detail/git/#available-modules","title":"Available modules","text":"The overview below shows which git installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using git, load one of these modules using a module load
command like:
module load git/2.42.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 git/2.42.0-GCCcore-13.2.0 x x x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x x x"},{"location":"available_software/detail/gmpy2/","title":"gmpy2","text":"GMP/MPIR, MPFR, and MPC interface to Python 2.6+ and 3.x
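As a short hedged illustration (assuming the gmpy2 module below is loaded), exact big-integer and high-precision float arithmetic could look like:

    import gmpy2
    from gmpy2 import mpz, mpfr

    print(mpz(2) ** 521 - 1)               # exact arbitrary-precision integer
    gmpy2.get_context().precision = 200    # bits of precision for mpfr values
    print(gmpy2.sqrt(mpfr(2)))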
https://github.com/aleaxit/gmpy
"},{"location":"available_software/detail/gmpy2/#available-modules","title":"Available modules","text":"The overview below shows which gmpy2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using gmpy2, load one of these modules using a module load
command like:
module load gmpy2/2.1.5-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmpy2/2.1.5-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/gnuplot/","title":"gnuplot","text":"Portable interactive, function plotting utility
http://gnuplot.sourceforge.net
"},{"location":"available_software/detail/gnuplot/#available-modules","title":"Available modules","text":"The overview below shows which gnuplot installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using gnuplot, load one of these modules using a module load
command like:
module load gnuplot/5.4.8-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/gompi/","title":"gompi","text":"GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.
(none)
"},{"location":"available_software/detail/gompi/#available-modules","title":"Available modules","text":"The overview below shows which gompi installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using gompi, load one of these modules using a module load
command like:
module load gompi/2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gompi/2023b x x x x x x x x gompi/2023a x x x x x x x x gompi/2022b x x x x x x x x"},{"location":"available_software/detail/googletest/","title":"googletest","text":"Google's framework for writing C++ tests on a variety of platforms
https://github.com/google/googletest
"},{"location":"available_software/detail/googletest/#available-modules","title":"Available modules","text":"The overview below shows which googletest installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using googletest, load one of these modules using a module load
command like:
module load googletest/1.14.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 googletest/1.14.0-GCCcore-13.2.0 x x x x x x x x googletest/1.13.0-GCCcore-12.3.0 x x x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/graphite2/","title":"graphite2","text":"Graphite is a \"smart font\" system developed specifically to handle the complexities of lesser-known languages of the world.
https://scripts.sil.org/cms/scripts/page.php?site_id=projects&item_id=graphite_home
"},{"location":"available_software/detail/graphite2/#available-modules","title":"Available modules","text":"The overview below shows which graphite2 installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using graphite2, load one of these modules using a module load
command like:
module load graphite2/1.3.14-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 graphite2/1.3.14-GCCcore-12.3.0 x x x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/groff/","title":"groff","text":"Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.
https://www.gnu.org/software/groff
"},{"location":"available_software/detail/groff/#available-modules","title":"Available modules","text":"The overview below shows which groff installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using groff, load one of these modules using a module load
command like:
module load groff/1.22.4-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 groff/1.22.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/gzip/","title":"gzip","text":"gzip (GNU zip) is a popular data compression program as a replacement for compress
https://www.gnu.org/software/gzip/
"},{"location":"available_software/detail/gzip/#available-modules","title":"Available modules","text":"The overview below shows which gzip installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using gzip, load one of these modules using a module load
command like:
module load gzip/1.13-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 gzip/1.13-GCCcore-13.2.0 x x x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/h5py/","title":"h5py","text":"HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.
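As a brief hedged sketch (assuming the h5py module below is loaded and numpy is available through its toolchain), writing and partially reading an HDF5 file could look like:

    import h5py
    import numpy as np

    with h5py.File("demo.h5", "w") as f:   # hypothetical file name
        f.create_dataset("data", data=np.arange(100).reshape(10, 10))

    with h5py.File("demo.h5", "r") as f:
        print(f["data"][2, :])             # read one row without loading the whole dataset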
https://www.h5py.org/
"},{"location":"available_software/detail/h5py/#available-modules","title":"Available modules","text":"The overview below shows which h5py installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using h5py, load one of these modules using a module load
command like:
module load h5py/3.9.0-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5py/3.9.0-foss-2023a x x x x x x x x"},{"location":"available_software/detail/hatchling/","title":"hatchling","text":"Extensible, standards compliant build backend used by Hatch, a modern, extensible Python project manager.
https://hatch.pypa.io
"},{"location":"available_software/detail/hatchling/#available-modules","title":"Available modules","text":"The overview below shows which hatchling installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using hatchling, load one of these modules using a module load
command like:
module load hatchling/1.18.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatchling/1.18.0-GCCcore-13.2.0 x x x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1320","title":"hatchling/1.18.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
editables-0.5, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, packaging-23.2, pathspec-0.11.2, pluggy-1.3.0, setuptools-scm-8.0.4, tomli-2.0.1, trove_classifiers-2023.10.18, typing_extensions-4.8.0
"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1230","title":"hatchling/1.18.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
editables-0.3, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, packaging-23.1, pathspec-0.11.1, pluggy-1.2.0, setuptools_scm-7.1.0, tomli-2.0.1, trove_classifiers-2023.5.24, typing_extensions-4.6.3
"},{"location":"available_software/detail/hwloc/","title":"hwloc","text":"The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.
https://www.open-mpi.org/projects/hwloc/
"},{"location":"available_software/detail/hwloc/#available-modules","title":"Available modules","text":"The overview below shows which hwloc installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using hwloc, load one of these modules using a module load
command like:
module load hwloc/2.9.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hwloc/2.9.2-GCCcore-13.2.0 x x x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/hypothesis/","title":"hypothesis","text":"Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.
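As a compact hedged example of property-based testing (assuming the hypothesis module below is loaded), a test parametrized by generated inputs could look like:

    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_sorting_is_idempotent(xs):
        once = sorted(xs)
        assert sorted(once) == once

    test_sorting_is_idempotent()   # hypothesis generates and shrinks the example lists itself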
https://github.com/HypothesisWorks/hypothesis
"},{"location":"available_software/detail/hypothesis/#available-modules","title":"Available modules","text":"The overview below shows which hypothesis installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using hypothesis, load one of these modules using a module load
command like:
module load hypothesis/6.90.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/jbigkit/","title":"jbigkit","text":"JBIG-KIT is a software implementation of the JBIG1 data compression standard (ITU-T T.82), which was designed for bi-level image data, such as scanned documents.
https://www.cl.cam.ac.uk/~mgk25/jbigkit/
"},{"location":"available_software/detail/jbigkit/#available-modules","title":"Available modules","text":"The overview below shows which jbigkit installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using jbigkit, load one of these modules using a module load
command like:
module load jbigkit/2.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 jbigkit/2.1-GCCcore-13.2.0 x x x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/json-c/","title":"json-c","text":"JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C, output them as JSON formatted strings and parse JSON formatted strings back into the C representation of JSON objects.
https://github.com/json-c/json-c
"},{"location":"available_software/detail/json-c/#available-modules","title":"Available modules","text":"The overview below shows which json-c installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using json-c, load one of these modules using a module load
command like:
module load json-c/0.16-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 json-c/0.16-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/jupyter-server/","title":"jupyter-server","text":"The Jupyter Server provides the backend (i.e. the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.
https://jupyter.org/
"},{"location":"available_software/detail/jupyter-server/#available-modules","title":"Available modules","text":"The overview below shows which jupyter-server installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using jupyter-server, load one of these modules using a module load
command like:
module load jupyter-server/2.7.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/jupyter-server/#jupyter-server272-gcccore-1230","title":"jupyter-server/2.7.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
anyio-3.7.1, argon2-cffi-bindings-21.2.0, argon2_cffi-23.1.0, arrow-1.2.3, bleach-6.0.0, comm-0.1.4, debugpy-1.6.7.post1, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.18.0, hatch_jupyter_builder-0.8.3, hatch_nodejs_version-0.3.1, ipykernel-6.25.1, ipython_genutils-0.2.0, ipywidgets-8.1.0, jsonschema-4.18.0, jsonschema_specifications-2023.7.1, jupyter_client-8.3.0, jupyter_core-5.3.1, jupyter_events-0.7.0, jupyter_packaging-0.12.3, jupyter_server-2.7.2, jupyter_server_terminals-0.4.4, jupyterlab_pygments-0.2.2, jupyterlab_widgets-3.0.8, mistune-3.0.1, nbclient-0.8.0, nbconvert-7.7.4, nbformat-5.9.2, nest_asyncio-1.5.7, notebook_shim-0.2.3, overrides-7.4.0, pandocfilters-1.5.0, prometheus_client-0.17.1, python-json-logger-2.0.7, referencing-0.30.2, rfc3339_validator-0.1.4, rfc3986_validator-0.1.1, rpds_py-0.9.2, Send2Trash-1.8.2, sniffio-1.3.0, terminado-0.17.1, tinycss2-1.2.1, websocket-client-1.6.1, widgetsnbextension-4.0.8
"},{"location":"available_software/detail/kim-api/","title":"kim-api","text":"Open Knowledgebase of Interatomic Models.KIM is an API and OpenKIM is a collection of interatomic models (potentials) foratomistic simulations. This is a library that can be used by simulation programsto get access to the models in the OpenKIM database.This EasyBuild only installs the API, the models can be installed with thepackage openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAMEor kim-api-collections-management install user OpenKIMto install them all.
https://openkim.org/
"},{"location":"available_software/detail/kim-api/#available-modules","title":"Available modules","text":"The overview below shows which kim-api installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using kim-api, load one of these modules using a module load
command like:
module load kim-api/2.3.0-GCC-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 kim-api/2.3.0-GCC-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libGLU/","title":"libGLU","text":"The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.
https://mesa.freedesktop.org/archive/glu/
"},{"location":"available_software/detail/libGLU/#available-modules","title":"Available modules","text":"The overview below shows which libGLU installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libGLU, load one of these modules using a module load
command like:
module load libGLU/9.0.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libGLU/9.0.3-GCCcore-12.3.0 x x x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libaec/","title":"libaec","text":"Libaec provides fast lossless compression of 1 up to 32 bit wide signed or unsigned integers (samples). The library achieves best results for low entropy data as often encountered in space imaging instrument data or numerical model output from weather or climate simulations. While floating point representations are not directly supported, they can also be efficiently coded by grouping exponents and mantissa.
https://gitlab.dkrz.de/k202009/libaec
"},{"location":"available_software/detail/libaec/#available-modules","title":"Available modules","text":"The overview below shows which libaec installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libaec, load one of these modules using a module load
command like:
module load libaec/1.0.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaec/1.0.6-GCCcore-13.2.0 x x x x x x x x libaec/1.0.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libarchive/","title":"libarchive","text":"Multi-format archive and compression library
https://www.libarchive.org/
"},{"location":"available_software/detail/libarchive/#available-modules","title":"Available modules","text":"The overview below shows which libarchive installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libarchive, load one of these modules using a module load
command like:
module load libarchive/3.7.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libarchive/3.7.2-GCCcore-13.2.0 x x x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libcerf/","title":"libcerf","text":"libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.
https://jugit.fz-juelich.de/mlz/libcerf
"},{"location":"available_software/detail/libcerf/#available-modules","title":"Available modules","text":"The overview below shows which libcerf installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libcerf, load one of these modules using a module load
command like:
module load libcerf/2.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcerf/2.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libdeflate/","title":"libdeflate","text":"Heavily optimized library for DEFLATE/zlib/gzip compression and decompression.
https://github.com/ebiggers/libdeflate
"},{"location":"available_software/detail/libdeflate/#available-modules","title":"Available modules","text":"The overview below shows which libdeflate installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libdeflate, load one of these modules using a module load
command like:
module load libdeflate/1.19-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdeflate/1.19-GCCcore-13.2.0 x x x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libdrm/","title":"libdrm","text":"Direct Rendering Manager runtime library.
https://dri.freedesktop.org
"},{"location":"available_software/detail/libdrm/#available-modules","title":"Available modules","text":"The overview below shows which libdrm installations are available per HPC-UGent Tier-2 cluster, ordered based on software version (new to old).
To start using libdrm, load one of these modules using a module load
command like:
module load libdrm/2.4.115-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdrm/2.4.115-GCCcore-12.3.0 x x x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libepoxy/","title":"libepoxy","text":"Epoxy is a library for handling OpenGL function pointer management for you
https://github.com/anholt/libepoxy
"},{"location":"available_software/detail/libepoxy/#available-modules","title":"Available modules","text":"The overview below shows which libepoxy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libepoxy, load one of these modules using a module load
command like:
module load libepoxy/1.5.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libevent/","title":"libevent","text":"The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.
https://libevent.org/
"},{"location":"available_software/detail/libevent/#available-modules","title":"Available modules","text":"The overview below shows which libevent installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libevent, load one of these modules using a module load
command like:
module load libevent/2.1.12-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libevent/2.1.12-GCCcore-13.2.0 x x x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libfabric/","title":"libfabric","text":"Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.
https://ofiwg.github.io/libfabric/
"},{"location":"available_software/detail/libfabric/#available-modules","title":"Available modules","text":"The overview below shows which libfabric installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libfabric, load one of these modules using a module load
command like:
module load libfabric/1.19.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libfabric/1.19.0-GCCcore-13.2.0 x x x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libffi/","title":"libffi","text":"The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.
https://sourceware.org/libffi/
"},{"location":"available_software/detail/libffi/#available-modules","title":"Available modules","text":"The overview below shows which libffi installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libffi, load one of these modules using a module load
command like:
module load libffi/3.4.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libffi/3.4.4-GCCcore-13.2.0 x x x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libgd/","title":"libgd","text":"GD is an open source code library for the dynamic creation of images by programmers.
https://libgd.github.io
"},{"location":"available_software/detail/libgd/#available-modules","title":"Available modules","text":"The overview below shows which libgd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgd, load one of these modules using a module load
command like:
module load libgd/2.3.3-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgd/2.3.3-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libgeotiff/","title":"libgeotiff","text":"Library for reading and writing coordinate system information from/to GeoTIFF files
https://directory.fsf.org/wiki/Libgeotiff
"},{"location":"available_software/detail/libgeotiff/#available-modules","title":"Available modules","text":"The overview below shows which libgeotiff installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgeotiff, load one of these modules using a module load
command like:
module load libgeotiff/1.7.1-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libgit2/","title":"libgit2","text":"libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings.
https://libgit2.org/
"},{"location":"available_software/detail/libgit2/#available-modules","title":"Available modules","text":"The overview below shows which libgit2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libgit2, load one of these modules using a module load
command like:
module load libgit2/1.7.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgit2/1.7.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libglvnd/","title":"libglvnd","text":"libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.
https://gitlab.freedesktop.org/glvnd/libglvnd
"},{"location":"available_software/detail/libglvnd/#available-modules","title":"Available modules","text":"The overview below shows which libglvnd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libglvnd, load one of these modules using a module load
command like:
module load libglvnd/1.6.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libiconv/","title":"libiconv","text":"Libiconv converts from one character encoding to another through Unicode conversion
https://www.gnu.org/software/libiconv
"},{"location":"available_software/detail/libiconv/#available-modules","title":"Available modules","text":"The overview below shows which libiconv installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libiconv, load one of these modules using a module load
command like:
module load libiconv/1.17-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libiconv/1.17-GCCcore-13.2.0 x x x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libidn2/","title":"libidn2","text":"Libidn2 implements the revised algorithm for internationalized domain names called IDNA2008/TR46.
http://www.gnu.org/software/libidn2
"},{"location":"available_software/detail/libidn2/#available-modules","title":"Available modules","text":"The overview below shows which libidn2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libidn2, load one of these modules using a module load
command like:
module load libidn2/2.3.2-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libidn2/2.3.2-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/libjpeg-turbo/","title":"libjpeg-turbo","text":"libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.
https://sourceforge.net/projects/libjpeg-turbo/
"},{"location":"available_software/detail/libjpeg-turbo/#available-modules","title":"Available modules","text":"The overview below shows which libjpeg-turbo installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libjpeg-turbo, load one of these modules using a module load
command like:
module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libpciaccess/","title":"libpciaccess","text":"Generic PCI access library.
https://cgit.freedesktop.org/xorg/lib/libpciaccess/
"},{"location":"available_software/detail/libpciaccess/#available-modules","title":"Available modules","text":"The overview below shows which libpciaccess installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libpciaccess, load one of these modules using a module load
command like:
module load libpciaccess/0.17-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpciaccess/0.17-GCCcore-13.2.0 x x x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libpng/","title":"libpng","text":"libpng is the official PNG reference library
http://www.libpng.org/pub/png/libpng.html
"},{"location":"available_software/detail/libpng/#available-modules","title":"Available modules","text":"The overview below shows which libpng installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libpng, load one of these modules using a module load
command like:
module load libpng/1.6.40-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpng/1.6.40-GCCcore-13.2.0 x x x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libsodium/","title":"libsodium","text":"Sodium is a modern, easy-to-use software library for encryption, decryption, signatures, password hashing and more.
https://doc.libsodium.org/
"},{"location":"available_software/detail/libsodium/#available-modules","title":"Available modules","text":"The overview below shows which libsodium installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libsodium, load one of these modules using a module load
command like:
module load libsodium/1.0.18-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsodium/1.0.18-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libtirpc/","title":"libtirpc","text":"Libtirpc is a port of Sun's Transport-Independent RPC library to Linux.
https://sourceforge.net/projects/libtirpc/
"},{"location":"available_software/detail/libtirpc/#available-modules","title":"Available modules","text":"The overview below shows which libtirpc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libtirpc, load one of these modules using a module load
command like:
module load libtirpc/1.3.3-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libunwind/","title":"libunwind","text":"The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications
https://www.nongnu.org/libunwind/
"},{"location":"available_software/detail/libunwind/#available-modules","title":"Available modules","text":"The overview below shows which libunwind installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libunwind, load one of these modules using a module load
command like:
module load libunwind/1.6.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libunwind/1.6.2-GCCcore-12.3.0 x x x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libwebp/","title":"libwebp","text":"WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
https://developers.google.com/speed/webp/
"},{"location":"available_software/detail/libwebp/#available-modules","title":"Available modules","text":"The overview below shows which libwebp installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libwebp, load one of these modules using a module load
command like:
module load libwebp/1.3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libwebp/1.3.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libxc/","title":"libxc","text":"Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.
https://www.tddft.org/programs/libxc
"},{"location":"available_software/detail/libxc/#available-modules","title":"Available modules","text":"The overview below shows which libxc installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxc, load one of these modules using a module load
command like:
module load libxc/6.1.0-GCC-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxc/6.1.0-GCC-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libxml2/","title":"libxml2","text":"Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).
http://xmlsoft.org/
"},{"location":"available_software/detail/libxml2/#available-modules","title":"Available modules","text":"The overview below shows which libxml2 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxml2, load one of these modules using a module load
command like:
module load libxml2/2.11.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2/2.11.5-GCCcore-13.2.0 x x x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/libxslt/","title":"libxslt","text":"Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).
http://xmlsoft.org/
"},{"location":"available_software/detail/libxslt/#available-modules","title":"Available modules","text":"The overview below shows which libxslt installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libxslt, load one of these modules using a module load
command like:
module load libxslt/1.1.38-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxslt/1.1.38-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/libyaml/","title":"libyaml","text":"LibYAML is a YAML parser and emitter written in C.
https://pyyaml.org/wiki/LibYAML
"},{"location":"available_software/detail/libyaml/#available-modules","title":"Available modules","text":"The overview below shows which libyaml installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using libyaml, load one of these modules using a module load
command like:
module load libyaml/0.2.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 libyaml/0.2.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/lxml/","title":"lxml","text":"The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.
https://lxml.de/
"},{"location":"available_software/detail/lxml/#available-modules","title":"Available modules","text":"The overview below shows which lxml installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using lxml, load one of these modules using a module load
command like:
module load lxml/4.9.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 lxml/4.9.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/lz4/","title":"lz4","text":"LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.
https://lz4.github.io/lz4/
"},{"location":"available_software/detail/lz4/#available-modules","title":"Available modules","text":"The overview below shows which lz4 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using lz4, load one of these modules using a module load
command like:
module load lz4/1.9.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 lz4/1.9.4-GCCcore-13.2.0 x x x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/make/","title":"make","text":"GNU version of make utility
https://www.gnu.org/software/make/make.html
"},{"location":"available_software/detail/make/#available-modules","title":"Available modules","text":"The overview below shows which make installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using make, load one of these modules using a module load
command like:
module load make/4.4.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 make/4.4.1-GCCcore-13.2.0 x x x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x x x make/4.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/matplotlib/","title":"matplotlib","text":"matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.
https://matplotlib.org
"},{"location":"available_software/detail/matplotlib/#available-modules","title":"Available modules","text":"The overview below shows which matplotlib installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using matplotlib, load one of these modules using a module load
command like:
module load matplotlib/3.8.2-gfbf-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 matplotlib/3.8.2-gfbf-2023b x x x x x x x x matplotlib/3.7.2-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/matplotlib/#matplotlib382-gfbf-2023b","title":"matplotlib/3.8.2-gfbf-2023b","text":"This is a list of extensions included in the module:
contourpy-1.2.0, Cycler-0.12.1, fonttools-4.47.0, kiwisolver-1.4.5, matplotlib-3.8.2
"},{"location":"available_software/detail/matplotlib/#matplotlib372-gfbf-2023a","title":"matplotlib/3.7.2-gfbf-2023a","text":"This is a list of extensions included in the module:
contourpy-1.1.0, Cycler-0.11.0, fonttools-4.42.0, kiwisolver-1.4.4, matplotlib-3.7.2
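As a quick illustration (not part of the generated module data), a minimal sketch of using matplotlib after loading one of the modules above; the Agg backend is chosen because compute nodes typically have no display, and the output file name is a placeholder:

```python
import matplotlib
matplotlib.use("Agg")           # render off-screen; typical on an HPC node without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.legend()
plt.savefig("sine.png")         # write the figure to a file instead of opening a window
```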
"},{"location":"available_software/detail/maturin/","title":"maturin","text":"This project is meant as a zero configurationreplacement for setuptools-rust and milksnake. It supports buildingwheels for python 3.5+ on windows, linux, mac and freebsd, can uploadthem to pypi and has basic pypy and graalpy support.
https://github.com/pyo3/maturin
"},{"location":"available_software/detail/maturin/#available-modules","title":"Available modules","text":"The overview below shows which maturin installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using maturin, load one of these modules using a module load
command like:
module load maturin/1.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 maturin/1.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/meson-python/","title":"meson-python","text":"Python build backend (PEP 517) for Meson projects
https://github.com/mesonbuild/meson-python
"},{"location":"available_software/detail/meson-python/#available-modules","title":"Available modules","text":"The overview below shows which meson-python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using meson-python, load one of these modules using a module load
command like:
module load meson-python/0.15.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 meson-python/0.15.0-GCCcore-13.2.0 x x x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1320","title":"meson-python/0.15.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
meson-python-0.15.0, pyproject-metadata-0.7.1
"},{"location":"available_software/detail/meson-python/#meson-python0132-gcccore-1230","title":"meson-python/0.13.2-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
meson-python-0.13.2, pyproject-metadata-0.7.1
"},{"location":"available_software/detail/mpi4py/","title":"mpi4py","text":"MPI for Python (mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
https://github.com/mpi4py/mpi4py
"},{"location":"available_software/detail/mpi4py/#available-modules","title":"Available modules","text":"The overview below shows which mpi4py installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using mpi4py, load one of these modules using a module load
command like:
module load mpi4py/3.1.4-gompi-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpi4py/3.1.4-gompi-2023a x x x x x x x x"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2023a","title":"mpi4py/3.1.4-gompi-2023a","text":"This is a list of extensions included in the module:
mpi4py-3.1.4
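As a quick, hypothetical sanity check (not part of the generated module data), a minimal mpi4py script; it assumes the module is loaded and the script is launched with mpirun or srun, and the file name is arbitrary:

```python
# hello_mpi.py -- run with e.g.: mpirun -np 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's rank within the communicator
size = comm.Get_size()   # total number of MPI processes
print(f"Hello from rank {rank} of {size}")
```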
"},{"location":"available_software/detail/netCDF-Fortran/","title":"netCDF-Fortran","text":"NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
https://www.unidata.ucar.edu/software/netcdf/
"},{"location":"available_software/detail/netCDF-Fortran/#available-modules","title":"Available modules","text":"The overview below shows which netCDF-Fortran installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using netCDF-Fortran, load one of these modules using a module load
command like:
module load netCDF-Fortran/4.6.0-gompi-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF-Fortran/4.6.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/netCDF/","title":"netCDF","text":"NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
https://www.unidata.ucar.edu/software/netcdf/
"},{"location":"available_software/detail/netCDF/#available-modules","title":"Available modules","text":"The overview below shows which netCDF installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using netCDF, load one of these modules using a module load
command like:
module load netCDF/4.9.2-gompi-2023b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF/4.9.2-gompi-2023b x x x x x x x x netCDF/4.9.2-gompi-2023a x x x x x x x x netCDF/4.9.0-gompi-2022b x x x x x x x x"},{"location":"available_software/detail/networkx/","title":"networkx","text":"NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
https://pypi.python.org/pypi/networkx
"},{"location":"available_software/detail/networkx/#available-modules","title":"Available modules","text":"The overview below shows which networkx installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using networkx, load one of these modules using a module load
command like:
module load networkx/3.1-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 networkx/3.1-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/nlohmann_json/","title":"nlohmann_json","text":"JSON for Modern C++
https://github.com/nlohmann/json
"},{"location":"available_software/detail/nlohmann_json/#available-modules","title":"Available modules","text":"The overview below shows which nlohmann_json installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nlohmann_json, load one of these modules using a module load
command like:
module load nlohmann_json/3.11.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nlohmann_json/3.11.3-GCCcore-13.2.0 x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/nodejs/","title":"nodejs","text":"Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
https://nodejs.org
"},{"location":"available_software/detail/nodejs/#available-modules","title":"Available modules","text":"The overview below shows which nodejs installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nodejs, load one of these modules using a module load
command like:
module load nodejs/18.17.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nodejs/18.17.1-GCCcore-12.3.0 x x x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/nsync/","title":"nsync","text":"nsync is a C library that exports various synchronization primitives, such as mutexes
https://github.com/google/nsync
"},{"location":"available_software/detail/nsync/#available-modules","title":"Available modules","text":"The overview below shows which nsync installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using nsync, load one of these modules using a module load
command like:
module load nsync/1.26.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 nsync/1.26.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/numactl/","title":"numactl","text":"The numactl program allows you to run your application program on specific cpu's and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.
https://github.com/numactl/numactl
"},{"location":"available_software/detail/numactl/#available-modules","title":"Available modules","text":"The overview below shows which numactl installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using numactl, load one of these modules using a module load
command like:
module load numactl/2.0.16-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 numactl/2.0.16-GCCcore-13.2.0 x x x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/patchelf/","title":"patchelf","text":"PatchELF is a small utility to modify the dynamic linker and RPATH of ELF executables.
https://github.com/NixOS/patchelf
"},{"location":"available_software/detail/patchelf/#available-modules","title":"Available modules","text":"The overview below shows which patchelf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using patchelf, load one of these modules using a module load
command like:
module load patchelf/0.18.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 patchelf/0.18.0-GCCcore-13.2.0 x x x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pixman/","title":"pixman","text":"Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.
http://www.pixman.org/
"},{"location":"available_software/detail/pixman/#available-modules","title":"Available modules","text":"The overview below shows which pixman installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pixman, load one of these modules using a module load
command like:
module load pixman/0.42.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pixman/0.42.2-GCCcore-12.3.0 x x x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/pkgconf/","title":"pkgconf","text":"pkgconf is a program which helps to configure compiler and linker flags for development libraries. It is similar to pkg-config from freedesktop.org.
https://github.com/pkgconf/pkgconf
"},{"location":"available_software/detail/pkgconf/#available-modules","title":"Available modules","text":"The overview below shows which pkgconf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pkgconf, load one of these modules using a module load
command like:
module load pkgconf/2.0.3-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x x x pkgconf/1.8.0 x x x x x x x x"},{"location":"available_software/detail/pkgconfig/","title":"pkgconfig","text":"pkgconfig is a Python module to interface with the pkg-config command line tool
https://github.com/matze/pkgconfig
"},{"location":"available_software/detail/pkgconfig/#available-modules","title":"Available modules","text":"The overview below shows which pkgconfig installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pkgconfig, load one of these modules using a module load
command like:
module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x x x"},{"location":"available_software/detail/poetry/","title":"poetry","text":"Python packaging and dependency management made easy. Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.
https://python-poetry.org
"},{"location":"available_software/detail/poetry/#available-modules","title":"Available modules","text":"The overview below shows which poetry installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using poetry, load one of these modules using a module load
command like:
module load poetry/1.6.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 poetry/1.6.1-GCCcore-13.2.0 x x x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/poetry/#poetry161-gcccore-1320","title":"poetry/1.6.1-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
attrs-23.1.0, build-0.10.0, cachecontrol-0.13.1, certifi-2023.7.22, charset-normalizer-3.3.1, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.6, html5lib-1.1, idna-3.4, importlib_metadata-6.8.0, installer-0.7.0, jaraco.classes-3.3.0, jeepney-0.8.0, jsonschema-4.17.3, keyring-24.2.0, lockfile-0.12.2, more-itertools-10.1.0, msgpack-1.0.7, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, poetry-1.6.1, poetry_core-1.7.0, poetry_plugin_export-1.5.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.20.0, rapidfuzz-2.15.2, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.4, six-1.16.0, tomlkit-0.12.1, urllib3-2.0.7, webencodings-0.5.1, zipp-3.17.0
"},{"location":"available_software/detail/poetry/#poetry151-gcccore-1230","title":"poetry/1.5.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
attrs-23.1.0, build-0.10.0, CacheControl-0.12.14, certifi-2023.5.7, charset-normalizer-3.1.0, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.5, html5lib-1.1, idna-3.4, importlib_metadata-6.7.0, installer-0.7.0, jaraco.classes-3.2.3, jeepney-0.8.0, jsonschema-4.17.3, keyring-23.13.1, lockfile-0.12.2, more-itertools-9.1.0, msgpack-1.0.5, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, poetry-1.5.1, poetry_core-1.6.1, poetry_plugin_export-1.4.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.19.3, rapidfuzz-2.15.1, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.0, six-1.16.0, tomlkit-0.11.8, urllib3-1.26.16, webencodings-0.5.1, zipp-3.15.0
"},{"location":"available_software/detail/protobuf-python/","title":"protobuf-python","text":"Python Protocol Buffers runtime library.
https://github.com/google/protobuf/
"},{"location":"available_software/detail/protobuf-python/#available-modules","title":"Available modules","text":"The overview below shows which protobuf-python installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using protobuf-python, load one of these modules using a module load
command like:
module load protobuf-python/4.24.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/protobuf/","title":"protobuf","text":"Protocol Buffers (a.k.a., protobuf) are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.
https://github.com/protocolbuffers/protobuf
"},{"location":"available_software/detail/protobuf/#available-modules","title":"Available modules","text":"The overview below shows which protobuf installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using protobuf, load one of these modules using a module load
command like:
module load protobuf/24.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf/24.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pybind11/","title":"pybind11","text":"pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.
https://pybind11.readthedocs.io
"},{"location":"available_software/detail/pybind11/#available-modules","title":"Available modules","text":"The overview below shows which pybind11 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pybind11, load one of these modules using a module load
command like:
module load pybind11/2.11.1-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pybind11/2.11.1-GCCcore-13.2.0 x x x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/pytest-flakefinder/","title":"pytest-flakefinder","text":"Runs tests multiple times to expose flakiness.
https://github.com/dropbox/pytest-flakefinder
"},{"location":"available_software/detail/pytest-flakefinder/#available-modules","title":"Available modules","text":"The overview below shows which pytest-flakefinder installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-flakefinder, load one of these modules using a module load
command like:
module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pytest-rerunfailures/","title":"pytest-rerunfailures","text":"pytest plugin to re-run tests to eliminate flaky failures.
https://github.com/pytest-dev/pytest-rerunfailures
"},{"location":"available_software/detail/pytest-rerunfailures/#available-modules","title":"Available modules","text":"The overview below shows which pytest-rerunfailures installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-rerunfailures, load one of these modules using a module load
command like:
module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/pytest-shard/","title":"pytest-shard","text":"pytest plugin to support parallelism across multiple machines. Shards tests based on a hash of their test name enabling easy parallelism across machines, suitable for a wide variety of continuous integration services. Tests are split at the finest level of granularity, individual test cases, enabling parallelism even if all of your tests are in a single file (or even single parameterized test method).
https://github.com/AdamGleave/pytest-shard
"},{"location":"available_software/detail/pytest-shard/#available-modules","title":"Available modules","text":"The overview below shows which pytest-shard installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using pytest-shard, load one of these modules using a module load
command like:
module load pytest-shard/0.1.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/re2c/","title":"re2c","text":"re2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.
https://re2c.org
"},{"location":"available_software/detail/re2c/#available-modules","title":"Available modules","text":"The overview below shows which re2c installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using re2c, load one of these modules using a module load
command like:
module load re2c/3.1-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 re2c/3.1-GCCcore-12.3.0 x x x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/scikit-build/","title":"scikit-build","text":"Scikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions.
https://scikit-build.readthedocs.io/en/latest
"},{"location":"available_software/detail/scikit-build/#available-modules","title":"Available modules","text":"The overview below shows which scikit-build installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using scikit-build, load one of these modules using a module load
command like:
module load scikit-build/0.17.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1320","title":"scikit-build/0.17.6-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
distro-1.8.0, packaging-23.1, scikit_build-0.17.6
"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1230","title":"scikit-build/0.17.6-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
distro-1.8.0, packaging-23.1, scikit_build-0.17.6
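For illustration only (not taken from these docs), a minimal setup.py sketch showing how scikit-build is typically used: skbuild.setup() acts as a drop-in replacement for setuptools.setup() that drives a CMake build. The project name is a placeholder, and a CMakeLists.txt is assumed to exist next to this file:

```python
# setup.py -- hypothetical scikit-build project; requires a CMakeLists.txt alongside it
from skbuild import setup   # drop-in replacement for setuptools.setup that invokes CMake

setup(
    name="myproject",        # placeholder project name
    version="0.1.0",
    packages=["myproject"],  # pure-Python part; compiled extensions come from CMake
)
```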
"},{"location":"available_software/detail/scikit-learn/","title":"scikit-learn","text":"Scikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world,building upon numpy, scipy, and matplotlib. As a machine-learning module,it provides versatile tools for data mining and analysis in any field of science and engineering.It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.
https://scikit-learn.org/stable/index.html
"},{"location":"available_software/detail/scikit-learn/#available-modules","title":"Available modules","text":"The overview below shows which scikit-learn installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using scikit-learn, load one of these modules using a module load
command like:
module load scikit-learn/1.3.1-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-learn/1.3.1-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/scikit-learn/#scikit-learn131-gfbf-2023a","title":"scikit-learn/1.3.1-gfbf-2023a","text":"This is a list of extensions included in the module:
scikit-learn-1.3.1, sklearn-0.0
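As a small, self-contained illustration (not part of the generated module data) of what a scikit-learn session could look like after loading the module; the dataset and estimator are arbitrary choices:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# split a toy dataset, fit a simple classifier, and report held-out accuracy
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```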
"},{"location":"available_software/detail/setuptools-rust/","title":"setuptools-rust","text":"setuptools-rust is a plugin for setuptools to build Rust Python extensionsimplemented with PyO3 or rust-cpython.
https://github.com/PyO3/setuptools-rust
"},{"location":"available_software/detail/setuptools-rust/#available-modules","title":"Available modules","text":"The overview below shows which setuptools-rust installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using setuptools-rust, load one of these modules using a module load
command like:
module load setuptools-rust/1.8.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust180-gcccore-1320","title":"setuptools-rust/1.8.0-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
semantic_version-2.10.0, setuptools-rust-1.8.0, typing_extensions-4.8.0
"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust160-gcccore-1230","title":"setuptools-rust/1.6.0-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
semantic_version-2.10.0, setuptools-rust-1.6.0, typing_extensions-4.6.3
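For illustration only (not taken from these docs), a minimal setup.py sketch of how setuptools-rust is commonly used to build a PyO3 extension; the package and extension names are placeholders, and a Cargo.toml for the Rust crate is assumed to be present:

```python
# setup.py -- hypothetical example of building a Rust extension with setuptools-rust
from setuptools import setup
from setuptools_rust import Binding, RustExtension

setup(
    name="hello-rust",       # placeholder distribution name
    version="0.1.0",
    packages=["hello_rust"],
    rust_extensions=[RustExtension("hello_rust._native", binding=Binding.PyO3)],
    zip_safe=False,          # compiled extensions cannot be loaded from a zip archive
)
```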
"},{"location":"available_software/detail/siscone/","title":"siscone","text":"Hadron Seedless Infrared-Safe Cone jet algorithm
https://siscone.hepforge.org/
"},{"location":"available_software/detail/siscone/#available-modules","title":"Available modules","text":"The overview below shows which siscone installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using siscone, load one of these modules using a module load
command like:
module load siscone/3.0.6-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 siscone/3.0.6-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/snakemake/","title":"snakemake","text":"The Snakemake workflow management system is a tool to create reproducible and scalable data analyses.
https://snakemake.readthedocs.io
"},{"location":"available_software/detail/snakemake/#available-modules","title":"Available modules","text":"The overview below shows which snakemake installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using snakemake, load one of these modules using a module load
command like:
module load snakemake/8.4.2-foss-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 snakemake/8.4.2-foss-2023a x x x x x x x x"},{"location":"available_software/detail/snakemake/#snakemake842-foss-2023a","title":"snakemake/8.4.2-foss-2023a","text":"This is a list of extensions included in the module:
argparse-dataclass-2.0.0, conda-inject-1.3.1, ConfigArgParse-1.7, connection-pool-0.0.3, datrie-0.8.2, dpath-2.1.6, fastjsonschema-2.19.1, humanfriendly-10.0, immutables-0.20, jupyter-core-5.7.1, nbformat-5.9.2, plac-1.4.2, reretry-0.11.8, smart-open-6.4.0, snakemake-8.4.2, snakemake-executor-plugin-cluster-generic-1.0.7, snakemake-executor-plugin-cluster-sync-0.1.3, snakemake-executor-plugin-flux-0.1.0, snakemake-executor-plugin-slurm-0.2.1, snakemake-executor-plugin-slurm-jobstep-0.1.10, snakemake-interface-common-1.15.2, snakemake-interface-executor-plugins-8.2.0, snakemake-interface-storage-plugins-3.0.0, stopit-1.1.2, throttler-1.2.2, toposort-1.10, yte-1.5.4
"},{"location":"available_software/detail/snappy/","title":"snappy","text":"Snappy is a compression/decompression library. It does not aimfor maximum compression, or compatibility with any other compression library;instead, it aims for very high speeds and reasonable compression.
https://github.com/google/snappy
"},{"location":"available_software/detail/snappy/#available-modules","title":"Available modules","text":"The overview below shows which snappy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using snappy, load one of these modules using a module load
command like:
module load snappy/1.1.10-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 snappy/1.1.10-GCCcore-12.3.0 x x x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/sympy/","title":"sympy","text":"SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.
https://sympy.org/
"},{"location":"available_software/detail/sympy/#available-modules","title":"Available modules","text":"The overview below shows which sympy installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using sympy, load one of these modules using a module load
command like:
module load sympy/1.12-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 sympy/1.12-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/tbb/","title":"tbb","text":"Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.
https://github.com/oneapi-src/oneTBB
"},{"location":"available_software/detail/tbb/#available-modules","title":"Available modules","text":"The overview below shows which tbb installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tbb, load one of these modules using a module load
command like:
module load tbb/2021.11.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tbb/2021.11.0-GCCcore-12.3.0 - - - x x x x x"},{"location":"available_software/detail/tcsh/","title":"tcsh","text":"Tcsh is an enhanced, but completely compatible version of the Berkeley UNIX C shell (csh). It is a command language interpreter usable both as an interactive login shell and a shell script command processor. It includes a command-line editor, programmable word completion, spelling correction, a history mechanism, job control and a C-like syntax.
https://www.tcsh.org
"},{"location":"available_software/detail/tcsh/#available-modules","title":"Available modules","text":"The overview below shows which tcsh installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tcsh, load one of these modules using a module load
command like:
module load tcsh/6.24.07-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tcsh/6.24.07-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/time/","title":"time","text":"The `time' command runs another program, then displays information about the resources used by that program, collected by the system while the program was running.
https://www.gnu.org/software/time/
"},{"location":"available_software/detail/time/#available-modules","title":"Available modules","text":"The overview below shows which time installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using time, load one of these modules using a module load
command like:
module load time/1.9-GCCcore-12.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 time/1.9-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/tornado/","title":"tornado","text":"Tornado is a Python web framework and asynchronous networking library.
https://github.com/tornadoweb/tornado
"},{"location":"available_software/detail/tornado/#available-modules","title":"Available modules","text":"The overview below shows which tornado installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using tornado, load one of these modules using a module load
command like:
module load tornado/6.3.2-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 tornado/6.3.2-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/typing-extensions/","title":"typing-extensions","text":"Typing Extensions \u2013 Backported and Experimental Type Hints for Python
https://github.com/python/typing/blob/master/typing_extensions/README.rst
"},{"location":"available_software/detail/typing-extensions/#available-modules","title":"Available modules","text":"The overview below shows which typing-extensions installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using typing-extensions, load one of these modules using a module load
command like:
module load typing-extensions/4.9.0-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/virtualenv/","title":"virtualenv","text":"A tool for creating isolated virtual python environments.
https://github.com/pypa/virtualenv
"},{"location":"available_software/detail/virtualenv/#available-modules","title":"Available modules","text":"The overview below shows which virtualenv installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using virtualenv, load one of these modules using a module load
command like:
module load virtualenv/20.24.6-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/virtualenv/#virtualenv20246-gcccore-1320","title":"virtualenv/20.24.6-GCCcore-13.2.0","text":"This is a list of extensions included in the module:
distlib-0.3.7, filelock-3.13.0, platformdirs-3.11.0, virtualenv-20.24.6
"},{"location":"available_software/detail/virtualenv/#virtualenv20231-gcccore-1230","title":"virtualenv/20.23.1-GCCcore-12.3.0","text":"This is a list of extensions included in the module:
distlib-0.3.6, filelock-3.12.2, platformdirs-3.8.0, virtualenv-20.23.1
"},{"location":"available_software/detail/waLBerla/","title":"waLBerla","text":"Widely applicable Lattics-Boltzmann from Erlangen is a block-structured high-performance framework for multiphysics simulations
https://walberla.net/index.html
"},{"location":"available_software/detail/waLBerla/#available-modules","title":"Available modules","text":"The overview below shows which waLBerla installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using waLBerla, load one of these modules using a module load
command like:
module load waLBerla/6.1-foss-2022b\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 waLBerla/6.1-foss-2022b x x x x x x x x"},{"location":"available_software/detail/wget/","title":"wget","text":"GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive commandline tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.
https://www.gnu.org/software/wget
"},{"location":"available_software/detail/wget/#available-modules","title":"Available modules","text":"The overview below shows which wget installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using wget, load one of these modules using a module load
command like:
module load wget/1.21.4-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 wget/1.21.4-GCCcore-13.2.0 x x x x x x x x"},{"location":"available_software/detail/wrapt/","title":"wrapt","text":"The aim of the wrapt module is to provide a transparent object proxy for Python, which can be used as the basis for the construction of function wrappers and decorator functions.
https://pypi.org/project/wrapt/
"},{"location":"available_software/detail/wrapt/#available-modules","title":"Available modules","text":"The overview below shows which wrapt installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using wrapt, load one of these modules using a module load
command like:
module load wrapt/1.15.0-gfbf-2023a\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 wrapt/1.15.0-gfbf-2023a x x x x x x x x"},{"location":"available_software/detail/wrapt/#wrapt1150-gfbf-2023a","title":"wrapt/1.15.0-gfbf-2023a","text":"This is a list of extensions included in the module:
wrapt-1.15.0
"},{"location":"available_software/detail/x264/","title":"x264","text":"x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.
https://www.videolan.org/developers/x264.html
"},{"location":"available_software/detail/x264/#available-modules","title":"Available modules","text":"The overview below shows which x264 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using x264, load one of these modules using a module load
command like:
module load x264/20230226-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 x264/20230226-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/x265/","title":"x265","text":"x265 is a free software library and application for encoding video streams into the H.265/HEVC compression format, and is released under the terms of the GNU GPL.
https://x265.org/
"},{"location":"available_software/detail/x265/#available-modules","title":"Available modules","text":"The overview below shows which x265 installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using x265, load one of these modules using a module load
command like:
module load x265/3.5-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 x265/3.5-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/xorg-macros/","title":"xorg-macros","text":"X.org macros utilities.
https://gitlab.freedesktop.org/xorg/util/macros
"},{"location":"available_software/detail/xorg-macros/#available-modules","title":"Available modules","text":"The overview below shows which xorg-macros installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using xorg-macros, load one of these modules using a module load
command like:
module load xorg-macros/1.20.0-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x x x"},{"location":"available_software/detail/xxd/","title":"xxd","text":"xxd is part of the VIM package and this will only install xxd, not vim! xxd converts to/from hexdumps of binary files.
https://www.vim.org
"},{"location":"available_software/detail/xxd/#available-modules","title":"Available modules","text":"The overview below shows which xxd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using xxd, load one of these modules using a module load
command like:
module load xxd/9.0.2112-GCCcore-12.3.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxd/9.0.2112-GCCcore-12.3.0 x x x x x x x x"},{"location":"available_software/detail/zstd/","title":"zstd","text":"Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.
https://facebook.github.io/zstd
"},{"location":"available_software/detail/zstd/#available-modules","title":"Available modules","text":"The overview below shows which zstd installations are available per HPC-UGent Tier-2cluster, ordered based on software version (new to old).
To start using zstd, load one of these modules using a module load
command like:
module load zstd/1.5.5-GCCcore-13.2.0\n
(This data was automatically generated on Tue, 12 Mar 2024 at 18:02:07 CET)
aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/intel/haswell x86_64/intel/skylake_avx512 zstd/1.5.5-GCCcore-13.2.0 x x x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x x x"},{"location":"blog/","title":"Blog","text":""},{"location":"blog/2024/05/17/isc24/","title":"EESSI promo tour @ ISC'24 (May 2024, Hamburg)","text":"This week, we had the privilege of attending the ISC'24 conference in the beautiful city of Hamburg, Germany. This was an excellent opportunity for us to showcase EESSI, and gain valuable insights and feedback from the HPC community.
"},{"location":"blog/2024/05/17/isc24/#bof-session-on-eessi","title":"BoF session on EESSI","text":"The EESSI Birds-of-a-Feather (BoF) session on Tuesday morning, part of the official ISC'24 program, was the highlight of our activities in Hamburg.
It was well attended, with well over 100 people joining us at 9am.
During this session, we introduced the EESSI project with a short presentation, followed by a well-received live hands-on demo of installing and using EESSI by spinning up an \"empty\" Linux virtual machine instance in Amazon EC2 and getting optimized installations of popular scientific applications like GROMACS and TensorFlow running in a matter of minutes.
During the second part of the BoF session, we engaged with the audience through an interactive poll and by letting attendees ask questions.
The presentation slides, including the results of the interactive poll and questions that were raised by attendees, are available here.
"},{"location":"blog/2024/05/17/isc24/#workshops","title":"Workshops","text":"During the last day of ISC'24, EESSI was present in no less than three different workshops.
"},{"location":"blog/2024/05/17/isc24/#risc-v-workshop","title":"RISC-V workshop","text":"At the Fourth International workshop on RISC-V for HPC, Juli\u00e1n Morillo (BSC) presented our paper \"Preparing to Hit the Ground Running: Adding RISC-V support to EESSI\" (slides available here).
Juli\u00e1n covered the initial work that was done in the scope of the MultiXscale EuroHPC Centre-of-Excellence to add support for RISC-V to EESSI, outlined the challenges we encountered, and shared the lessons we have learned along the way.
"},{"location":"blog/2024/05/17/isc24/#ahug-workshop","title":"AHUG workshop","text":"During the Arm HPC User Group (AHUG) workshop, Kenneth Hoste (HPC-UGent) gave a talk entitled \"Extending Arm\u2019s Reach by Going EESSI\" (slides available here).
Next to a high-level introduction to EESSI, we briefly covered some of the challenges we encountered when testing the optimized software installations that we had built for the Arm Neoverse V1 microarchitecture, including bugs in OpenMPI and GROMACS.
Kenneth gave a live demonstration of how to get access to EESSI and start running the optimized software installations we provide through our CernVM-FS repository on a fresh AWS Graviton 3 instance in a matter of minutes.
"},{"location":"blog/2024/05/17/isc24/#pop-workshop","title":"POP workshop","text":"In the afternoon on Thursday, Lara Peeters (HPC-UGent) presented MultiXscale during the Readiness of HPC Extreme-scale Applications workshop, which was organised by the POP EuroHPC Centre-of-Excellence (slides available here).
Lara outlined the pilot use cases on which MultiXscale focuses, and explained how EESSI helps to achieve the goals of MultiXscale in terms of Productivity, Performance, and Portability.
At the end of the workshop, a group picture was taken with both organisers and speakers, which was a great way to wrap up a busy week in Hamburg!
"},{"location":"blog/2024/05/17/isc24/#talks-and-demos-on-eessi-at-exhibit","title":"Talks and demos on EESSI at exhibit","text":"Not only was EESSI part of the official ISC'24 program via a dedicated BoF session and various workshops: we were also prominently present on the exhibit floor.
"},{"location":"blog/2024/05/17/isc24/#microsoft-azure-booth","title":"Microsoft Azure booth","text":"Microsoft Azure invited us to give a 1-hour introductory presentation on EESSI on both Monday and Wednesday at their booth during the ISC'24 exhibit, as well as to provide live demonstrations at the demo corner of their booth on Tuesday afternoon on how to get access to EESSI and the user experience it provides.
Exhibit attendees were welcome to pass by and ask questions, and did so throughout the full 4 hours we were present there.
Both Microsoft Azure and AWS have been graciously providing resources in their cloud infrastructure free-of-cost for developing, testing, and demonstrating EESSI for several years now.
"},{"location":"blog/2024/05/17/isc24/#eurohpc-booth","title":"EuroHPC booth","text":"The MultiXscale EuroHPC Centre-of-Excellence we are actively involved in, and through which the development of EESSI is being co-funded since Jan'23, was invited by the EuroHPC JU to present the goals and preliminary achievements at their booth.
Elisabeth Ortega (HPCNow!) did the honours to give the last talk at the EuroHPC JU booth of the ISC'24 exhibit.
"},{"location":"blog/2024/05/17/isc24/#stickers","title":"Stickers!","text":"Last but not least: we handed out a boatload free stickers with the logo of both MultiXscale and EESSI itself, as well as of various of the open source software projects we leverage, including EasyBuild, Lmod, and CernVM-FS.
We have mostly exhausted our sticker collection during ISC'24, but don't worry: we will make sure we have more available at upcoming events...
"},{"location":"filesystem_layer/stratum1/","title":"Setting up a Stratum 1","text":"Setting up a Stratum 1 involves the following steps:
- set up the Stratum 1, preferably by running the Ansible playbook that we provide;
- request a Stratum 0 firewall exception for your Stratum 1 server;
- request a
<your site>.stratum1.cvmfs.eessi-infra.org
DNS entry; - open a pull request to include the URL to your Stratum 1 in the EESSI configuration.
The last two steps can be skipped if you want to host a \"private\" Stratum 1 for your site.
"},{"location":"filesystem_layer/stratum1/#requirements-for-a-stratum-1","title":"Requirements for a Stratum 1","text":"The main requirements for a Stratum 1 server are a good network connection to the clients it is going to serve, and sufficient disk space. For the EESSI repository, a few hundred gigabytes should suffice, but for production environments at least 1 TB would be recommended.
In terms of cores and memory, a machine with just a few (~4) cores and 4-8 GB of memory should suffice.
Various Linux distributions are supported, but we recommend one based on RHEL 7 or 8.
Finally, make sure that ports 80 (for the Apache web server) and 8000 are open.
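As an illustration, on a RHEL-based server that uses firewalld (an assumption; adapt this to whatever firewall tooling your site uses), opening these two ports could look like:
# hypothetical firewalld commands; only needed if a host firewall is active\nsudo firewall-cmd --permanent --add-port=80/tcp\nsudo firewall-cmd --permanent --add-port=8000/tcp\nsudo firewall-cmd --reload\n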
"},{"location":"filesystem_layer/stratum1/#step-1-set-up-the-stratum-1","title":"Step 1: set up the Stratum 1","text":"The recommended way for setting up an EESSI Stratum 1 is by running the Ansible playbook stratum1.yml
from the filesystem-layer repository on GitHub.
Installing a Stratum 1 requires a GEO API license key, which will be used to find the (geographically) closest Stratum 1 server for your client and proxies. More information on how to (freely) obtain this key is available in the CVMFS documentation: https://cvmfs.readthedocs.io/en/stable/cpt-replica.html#geo-api-setup.
You can put your license key in the local configuration file inventory/local_site_specific_vars.yml
.
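As a rough sketch of what that entry could look like (the key value below is a placeholder, and the exact variable name should be double-checked against the example file shipped with the repository):
# hypothetical excerpt of inventory/local_site_specific_vars.yml; verify the variable name against the provided example file\ncvmfs_geo_license_key: \"YOUR_GEO_API_LICENSE_KEY\"\n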
Furthermore, the Stratum 1 runs a Squid server. The template configuration file can be found at templates/eessi_stratum1_squid.conf.j2
. If you want to customize it, for instance for limiting the access to the Stratum 1, you can make your own version of this template file and point to it by setting local_stratum1_cvmfs_squid_conf_src
in inventory/local_site_specific_vars.yml
. See the comments in the example file for more details.
Start by installing Ansible:
sudo yum install -y ansible\n
Then install Ansible roles for EESSI:
ansible-galaxy role install -r requirements.yml -p ./roles --force\n
Make sure you have enough space in /srv
(on the Stratum 1) since the snapshot of the Stratum 0 will end up there by default. To alter the directory where the snapshot gets copied to you can add this variable in inventory/host_vars/<url-or-ip-to-your-stratum1>
:
cvmfs_srv_mount: /srv\n
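For example, a minimal sketch (with a hypothetical hostname and mount point) that makes the snapshot end up under /data instead:
# hypothetical host_vars file; use the hostname or IP that is listed in your inventory\nmkdir -p inventory/host_vars\necho 'cvmfs_srv_mount: /data' > inventory/host_vars/stratum1.example.org\n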
Make sure that you have added the hostname or IP address of your server to the inventory/hosts
file. Finally, install the Stratum 1 using one of the two following options.
Option 1:
# -b to run as root, optionally use -K if a sudo password is required\nansible-playbook -b [-K] -e @inventory/local_site_specific_vars.yml stratum1.yml\n
Option 2:
Create an SSH key pair and make sure the ansible-host-keys.pub
is in the $HOME/.ssh/authorized_keys
file on your Stratum 1 server.
ssh-keygen -b 2048 -t rsa -f ~/.ssh/ansible-host-keys -q -N \"\"\n
Then run the playbook:
ansible-playbook -b --private-key ~/.ssh/ansible-host-keys -e @inventory/local_site_specific_vars.yml stratum1.yml\n
Running the playbook will automatically make replicas of all the repositories defined in group_vars/all.yml
.
"},{"location":"filesystem_layer/stratum1/#step-2-request-a-firewall-exception","title":"Step 2: request a firewall exception","text":"(This step is not implemented yet and can be skipped)
You can request a firewall exception rule to be added for your Stratum 1 server by opening an issue on the GitHub page of the filesystem layer repository.
Make sure to include the IP address of your server.
"},{"location":"filesystem_layer/stratum1/#step-3-verification-of-the-stratum-1","title":"Step 3: Verification of the Stratum 1","text":"When the playbook has finished your Stratum 1 should be ready. In order to test your Stratum 1, even without a client installed, you can use curl
.
curl --head http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io/.cvmfspublished\n
This should return: HTTP/1.1 200 OK\n...\nX-Cache: MISS from <url-or-ip-to-your-stratum1>\n
The second time you run it, you should get a cache hit:
X-Cache: HIT from <url-or-ip-to-your-stratum1>\n
Example with the Norwegian Stratum 1:
curl --head http://bgo-no.stratum1.cvmfs.eessi-infra.org/cvmfs/software.eessi.io/.cvmfspublished\n
You can also test access to your Stratum 1 from a client, for which you will have to install the CVMFS client.
Then run the following command to add your newly created Stratum 1 to the existing list of EESSI Stratum 1 servers by creating a local CVMFS configuration file:
echo 'CVMFS_SERVER_URL=\"http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@;$CVMFS_SERVER_URL\"' | sudo tee -a /etc/cvmfs/domain.d/eessi-hpc.org.local\n
If this is the first time you set up the client you now run:
sudo cvmfs_config setup\n
If you already had configured the client before, you can simply reload the config:
sudo cvmfs_config reload -c software.eessi.io\n
Finally, verify that the client connects to your new Stratum 1 by running:
cvmfs_config stat -v software.eessi.io\n
Assuming that your new Stratum 1 is the geographically closest one to your client, this should return:
Connection: http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io through proxy DIRECT (online)\n
"},{"location":"filesystem_layer/stratum1/#step-4-request-an-eessi-dns-name","title":"Step 4: request an EESSI DNS name","text":"In order to keep the configuration clean and easy, all the EESSI Stratum 1 servers have a DNS name <your site>.stratum1.cvmfs.eessi-infra.org
, where <your site>
is often a short name or abbreviation followed by the country code (e.g. rug-nl
or bgo-no
). You can request this for your Stratum 1 by mentioning this in the issue that you created in Step 2, or by opening another issue.
"},{"location":"filesystem_layer/stratum1/#step-5-include-your-stratum-1-in-the-eessi-configuration","title":"Step 5: include your Stratum 1 in the EESSI configuration","text":"If you want to include your Stratum 1 in the EESSI configuration, i.e. allow any (nearby) client to be able to use it, you can open a pull request with updated configuration files. You will only have to add the URL to your Stratum 1 to the urls
list of the eessi_cvmfs_server_urls
variable in the all.yml
file.
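As a rough, hypothetical sketch of such a change (the exact structure of the variable may differ, so follow the existing entries in all.yml and only append your own URL):
# hypothetical excerpt of all.yml; keep the existing entries as they are\neessi_cvmfs_server_urls:\n  - domain: eessi.io\n    urls:\n      - \"http://your-site.stratum1.cvmfs.eessi-infra.org/cvmfs/@fqrn@\"\n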
"},{"location":"getting_access/eessi_container/","title":"EESSI container script","text":"The eessi_container.sh
script provides a very easy yet versatile means to access EESSI. It is the preferred method to start an EESSI container as it has support for many different scenarios via various options.
This page guides you through several example scenarios illustrating the use of the script.
"},{"location":"getting_access/eessi_container/#prerequisites","title":"Prerequisites","text":" - Apptainer 1.0.0 (or newer), or Singularity 3.7.x
- Check with
apptainer --version
or singularity --version
- Support for the
--fusemount
option in the shell
and run
subcommands is required
- Git
- Check with
git --version
"},{"location":"getting_access/eessi_container/#preparation","title":"Preparation","text":"Clone the EESSI/software-layer
repository and change into the software-layer
directory by running these commands:
git clone https://github.com/EESSI/software-layer.git\ncd software-layer\n
"},{"location":"getting_access/eessi_container/#quickstart","title":"Quickstart","text":"Run the eessi_container
script (from the software-layer
directory) to start a shell session in the EESSI container:
./eessi_container.sh\n
Note
Startup will take a bit longer the first time you run this because the container image is downloaded and converted.
You should see output like
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nApptainer> CernVM-FS: loading Fuse module... done\nCernVM-FS: loading Fuse module... done\n\nApptainer>\n
Note
You may have to press enter to clearly see the prompt as some messages beginning with CernVM-FS:
have been printed after the first prompt Apptainer>
was shown.
To start using EESSI, see Using EESSI/Setting up your environment.
"},{"location":"getting_access/eessi_container/#help-for-eessi_containersh","title":"Help for eessi_container.sh
","text":"The example in the Quickstart section facilitates an interactive session with read access to the EESSI software stack. It does not require any command line options, because the script eessi_container.sh
uses some carefully chosen defaults. To view all options of the script and its default values, run the command
./eessi_container.sh --help\n
You should see the following output usage: ./eessi_container.sh [OPTIONS] [[--] SCRIPT or COMMAND]\n OPTIONS:\n -a | --access {ro,rw} - ro (read-only), rw (read & write) [default: ro]\n -c | --container IMG - image file or URL defining the container to use\n [default: docker://ghcr.io/eessi/build-node:debian11]\n -g | --storage DIR - directory space on host machine (used for\n temporary data) [default: 1. TMPDIR, 2. /tmp]\n -h | --help - display this usage information [default: false]\n -i | --host-injections - directory to link to for host_injections \n [default: /..storage../opt-eessi]\n -l | --list-repos - list available repository identifiers [default: false]\n -m | --mode MODE - with MODE==shell (launch interactive shell) or\n MODE==run (run a script or command) [default: shell]\n -n | --nvidia MODE - configure the container to work with NVIDIA GPUs,\n MODE==install for a CUDA installation, MODE==run to\n attach a GPU, MODE==all for both [default: false]\n -r | --repository CFG - configuration file or identifier defining the\n repository to use [default: EESSI via\n container configuration]\n -u | --resume DIR/TGZ - resume a previous run from a directory or tarball,\n where DIR points to a previously used tmp directory\n (check for output 'Using DIR as tmp ...' of a previous\n run) and TGZ is the path to a tarball which is\n unpacked the tmp dir stored on the local storage space\n (see option --storage above) [default: not set]\n -s | --save DIR/TGZ - save contents of tmp directory to a tarball in\n directory DIR or provided with the fixed full path TGZ\n when a directory is provided, the format of the\n tarball's name will be {REPO_ID}-{TIMESTAMP}.tgz\n [default: not set]\n -v | --verbose - display more information [default: false]\n -x | --http-proxy URL - provides URL for the env variable http_proxy\n [default: not set]; uses env var $http_proxy if set\n -y | --https-proxy URL - provides URL for the env variable https_proxy\n [default: not set]; uses env var $https_proxy if set\n\n If value for --mode is 'run', the SCRIPT/COMMAND provided is executed. If\n arguments to the script/command start with '-' or '--', use the flag terminator\n '--' to let eessi_container.sh stop parsing arguments.\n
So, the defaults are equal to running the command
./eessi_container.sh --access ro --container docker://ghcr.io/eessi/build-node:debian11 --mode shell --repository EESSI\n
and it would either create a temporary directory under ${TMPDIR}
(if defined), or /tmp
(if ${TMPDIR}
is not defined). The remainder of this page will demonstrate different scenarios using some of the command line options used for read-only access.
Other options supported by the script will be discussed in a yet-to-be written section covering building software to be added to the EESSI stack.
"},{"location":"getting_access/eessi_container/#resuming-a-previous-session","title":"Resuming a previous session","text":"You may have noted the following line in the output of eessi_container.sh
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\n
Note
The parameter after --resume
(/tmp/eessi.abc123defg
) will be different when you run eessi_container.sh
.
Scroll back in your terminal and copy it so you can pass it to --resume
.
Try the following command to \"resume\" from the last session.
./eessi_container.sh --resume /tmp/eessi.abc123defg\n
This should run much faster because the container image has been cached in the temporary directory (/tmp/eessi.abc123defg
). You should get to the prompt (Apptainer>
or Singularity>
) and can use EESSI with the state where you left the previous session. Note
The state refers to what was stored on disk, not what was changed in memory. Particularly, any environment (variable) settings are not restored automatically.
Because the /tmp/eessi.abc123defg
directory contains a home
directory which includes the saved history of your last session, you can easily restore the environment (variable) settings. Type history
to see which commands you ran. You should be able to access the history as you would do in a normal terminal session.
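For example, to review the most recent commands from that previous session (a minimal sketch):
history | tail -n 20\n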
"},{"location":"getting_access/eessi_container/#running-a-simple-command","title":"Running a simple command","text":"Let's \"ls /cvmfs/software.eessi.io
\" through the eessi_container.sh
script to check if the CernVM-FS EESSI repository is accessible:
./eessi_container.sh --mode run ls /cvmfs/software.eessi.io\n
You should see an output such as
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nhost_injections latest versions\n
Note that this time no interactive shell session is started in the container: only the provided command is run in the container, and when that finishes you are back in the shell session where you ran the eessi_container.sh
script.
This is because we used the --mode run
command line option.
Note
The last line in the output is the output of the ls
command, which shows the contents of the /cvmfs/software.eessi.io
directory.
Also, note that there is no shell prompt (Apptainer>
or Singularity
), since no interactive shell session is started in the container.
As an alternative to specifying the command as we did above, you can also do the following.
CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh --mode shell <<< ${CMD}\n
Note
We changed the mode from run
to shell
because we use a different method to let the script run our command, by feeding it in via the stdin
input channel using <<<
.
Because shell
is the default value for --mode
we can also omit this and simply run
CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
"},{"location":"getting_access/eessi_container/#running-a-script","title":"Running a script","text":"While running simple command can be sufficient in some cases, you often want to run scripts containing multiple commands.
Let's run the script shown below.
First, copy-paste the contents for the script shown below, and create a file named eessi_architectures.sh
in your current directory. Also make the script executable, by running:
chmod +x eessi_architectures.sh\n
Here are the contents for the eessi_architectures.sh
script:
#!/usr/bin/env bash\n#\n# This script determines which architectures are included in the\n# latest EESSI version. It makes use of the specific directory\n# structure in the EESSI repository.\n#\n\n# determine list of available OS types\nBASE=${EESSI_CVMFS_REPO:-/cvmfs/software.eessi.io}/latest/software\ncd ${BASE}\nfor os_type in $(ls -d *)\ndo\n # determine architecture families\n OS_BASE=${BASE}/${os_type}\n cd ${OS_BASE}\n for arch_family in $(ls -d *)\n do\n # determine CPU microarchitectures\n OS_ARCH_BASE=${BASE}/${os_type}/${arch_family}\n cd ${OS_ARCH_BASE}\n for microarch in $(ls -d *)\n do\n case ${microarch} in\n amd | intel )\n for sub in $(ls ${microarch})\n do\n echo \"${os_type}/${arch_family}/${microarch}/${sub}\"\n done\n ;;\n * )\n echo \"${os_type}/${arch_family}/${microarch}\"\n ;;\n esac\n done\n done\ndone\n
Run the script as follows ./eessi_container.sh --mode shell < eessi_architectures.sh\n
The output should be similar to Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nlinux/aarch64/generic\nlinux/aarch64/graviton2\nlinux/aarch64/graviton3\nlinux/ppc64le/generic\nlinux/ppc64le/power9le\nlinux/x86_64/amd/zen2\nlinux/x86_64/amd/zen3\nlinux/x86_64/generic\nlinux/x86_64/intel/haswell\nlinux/x86_64/intel/skylake_avx512\n
Lines 6 to 15 show the output of the script eessi_architectures.sh
. If you want to use the mode run
, you have to make the script's location available inside the container.
This can be done by mapping the current directory (${PWD}
), which contains eessi_architectures.sh
, to any not-yet existing directory inside the container using the $SINGULARITY_BIND
or $APPTAINER_BIND
environment variable.
For example:
SINGULARITY_BIND=${PWD}:/scripts ./eessi_container.sh --mode run /scripts/eessi_architectures.sh\n
"},{"location":"getting_access/eessi_container/#running-scripts-or-commands-with-parameters-starting-with-or-","title":"Running scripts or commands with parameters starting with -
or --
","text":"Let's assume we would like to get more information about the entries of /cvmfs/software.eessi.io
. If we would just run
./eessi_container.sh --mode run ls -lH /cvmfs/software.eessi.io\n
we would get an error message such as ERROR: Unknown option: -lH\n
We can resolve this in two ways: - Using the
stdin
channel as described above, for example, by simply running CMD=\"ls -lH /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
which should result in the output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user 10 Jun 30 2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user 16 May 4 2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10 2021 versions\n
- Using the flag terminator
--
which tells eessi_container.sh
to stop parsing command line arguments. For example, ./eessi_container.sh --mode run -- ls -lH /cvmfs/software.eessi.io\n
which should result in the output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q run --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif ls -lH /cvmfs/software.eessi.io\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user 10 Jun 30 2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user 16 May 4 2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10 2021 versions\n
"},{"location":"getting_access/eessi_container/#running-eessi-demos","title":"Running EESSI demos","text":"For examples of scripts that use the software provided by EESSI, see Running EESSI demos.
"},{"location":"getting_access/eessi_container/#launching-containers-more-quickly","title":"Launching containers more quickly","text":"Subsequent runs of eessi_container.sh
may reuse temporary data of a previous session, which includes the pulled image of the container. However, reusing a previous session is not always what we want, even though it lets us launch the container more quickly.
The eessi_container.sh
script may (re)-use a cache directory provided via $SINGULARITY_CACHEDIR
(or $APPTAINER_CACHEDIR
when using Apptainer). Hence, the container image does not have to be downloaded again even when starting a new session. The example below illustrates this.
export SINGULARITY_CACHEDIR=${PWD}/container_cache_dir\ntime ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
which should produce output similar to Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections latest versions\n\nreal m40.445s\nuser 3m2.621s\nsys 0m7.402s\n
The next run using the same cache directory, e.g., by simply executing time ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
is much faster Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections latest versions\n\nreal 0m2.781s\nuser 0m0.172s\nsys 0m0.436s\n
Note
Each run of eessi_container.sh
(without specifying --resume
) creates a new temporary directory. The temporary directory stores, among other data, the image file of the container. Thus we can ensure that the container is available locally for a subsequent run.
However, this may quickly consume scarce resources, for example, a small partition where /tmp
is located (default for temporary storage, see --help
for specifying a different location).
See next section for making sure to clean up no longer needed temporary data.
"},{"location":"getting_access/eessi_container/#reducing-disk-usage","title":"Reducing disk usage","text":"By default eessi_container.sh
creates a temporary directory under /tmp
. The directories are named eessi.RANDOM
where RANDOM
is a 10-character string. The script does not automatically remove these directories. To determine their total disk usage, simply run
du -sch /tmp/eessi.*\n
which could result in output similar to 333M /tmp/eessi.session123\n333M /tmp/eessi.session456\n333M /tmp/eessi.session789\n997M total\n
Clean up disk usage by simply removing directories you do not need any longer."},{"location":"getting_access/eessi_container/#eessi-container-image","title":"EESSI container image","text":"If you would like to directly use an EESSI container image, you can do so by configuring apptainer
to correctly mount the CVMFS repository:
# honor $TMPDIR if it is already defined, use /tmp otherwise\nif [ -z $TMPDIR ]; then\n export WORKDIR=/tmp/$USER\nelse\n export WORKDIR=$TMPDIR/$USER\nfi\n\nmkdir -p ${WORKDIR}/{var-lib-cvmfs,var-run-cvmfs,home}\nexport SINGULARITY_BIND=\"${WORKDIR}/var-run-cvmfs:/var/run/cvmfs,${WORKDIR}/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"${WORKDIR}/home:/home/$USER\"\nexport EESSI_REPO=\"container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io\"\nexport EESSI_CONTAINER=\"docker://ghcr.io/eessi/client:centos7\"\nsingularity shell --fusemount \"$EESSI_REPO\" \"$EESSI_CONTAINER\"\n
"},{"location":"getting_access/is_eessi_accessible/","title":"Is EESSI accessible?","text":"EESSI can be accessed via a native (CernVM-FS) installation, or via a container that includes CernVM-FS.
Before you look into these options, check if EESSI is already accessible on your system.
Run the following command:
ls /cvmfs/software.eessi.io\n
Note
This ls
command may take a couple of seconds to finish, since CernVM-FS may need to download or update the metadata for that directory.
If you see output like shown below, you already have access to EESSI on your system.
host_injections latest versions\n
For starting to use EESSI, continue reading about Setting up environment.
If you see an error message as shown below, EESSI is not yet accessible on your system.
ls: /cvmfs/software.eessi.io: No such file or directory\n
No worries, you don't need to be a system administrator to get access to EESSI. Continue reading about the Native installation of EESSI, or access via the EESSI container.
"},{"location":"getting_access/native_installation/","title":"Native installation","text":"Setting up native access to EESSI, that is a system-wide deployment that does not require workarounds like using a container, requires the installation and configuration of CernVM-FS.
This requires admin privileges, since you need to install CernVM-FS as an OS package.
The following actions must be taken for a (basic) native installation of EESSI:
- Installing CernVM-FS itself, ideally using the OS packages provided by the CernVM-FS project (although installing from source is also possible);
- Installing the EESSI configuration for CernVM-FS, which can be done by installing the
cvmfs-config-eessi
package that we provide for the most popular Linux distributions (more information available here); - Creating a small client configuration file for CernVM-FS (
/etc/cvmfs/default.local
); see also the CernVM-FS documentation.
The good news is that all of this only requires a handful of commands:
RHEL-based Linux distributionsDebian-based Linux distributions # Installation commands for RHEL-based distros like CentOS, Rocky Linux, Almalinux, Fedora, ...\n\n# install CernVM-FS\nsudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\nsudo yum install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nsudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
Note
The commands above only cover the basic installation of EESSI.
This is good enough for an individual client, or for testing purposes, but for a production-quality setup you should also set up a Squid proxy cache.
For large-scale systems, like an HPC cluster, you should also consider setting up your own CernVM-FS Stratum-1 mirror server.
For more details on this, please refer to the Stratum 1 and proxies section of the CernVM-FS tutorial.
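As an illustration (assuming a hypothetical site Squid proxy reachable at proxy.example.org on port 3128), the CernVM-FS client configuration in /etc/cvmfs/default.local would then point at the proxy rather than using the \"single\" client profile:
# hypothetical /etc/cvmfs/default.local for clients behind a site Squid proxy\nCVMFS_HTTP_PROXY=\"http://proxy.example.org:3128\"\nCVMFS_QUOTA_LIMIT=10000\n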
"},{"location":"known_issues/eessi-2023.06/","title":"Known issues","text":""},{"location":"known_issues/eessi-2023.06/#eessi-production-repository-v202306","title":"EESSI Production Repository (v2023.06)","text":""},{"location":"known_issues/eessi-2023.06/#failed-to-modify-ud-qp-to-init-on-mlx5_0-operation-not-permitted","title":"Failed to modify UD QP to INIT on mlx5_0: Operation not permitted
","text":"This is an error that occurs with OpenMPI after updating to OFED 23.10.
There is an upstream issue opened about this problem with EasyBuild. See: https://github.com/easybuilders/easybuild-easyconfigs/issues/20233
Workarounds You can instruct OpenMPI to not use libfabric and turn off `uct` (see https://openucx.readthedocs.io/en/master/running.html#running-mpi) by passing the following options to `mpirun`:
mpirun -mca pml ucx -mca btl '^uct,ofi' -mca mtl '^ofi'\n
Or equivalently, you can set the following environment variables: export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
"},{"location":"meetings/2022-09-amsterdam/","title":"EESSI Community Meeting (Sept'22, Amsterdam)","text":""},{"location":"meetings/2022-09-amsterdam/#practical-info","title":"Practical info","text":" - dates: Wed-Fri 14-16 Sept'22
- in conjunction with CernVM workshop @ Nikhef (Mon-Tue 12-13 Sept'22)
- venue: \"Polderzaal\" at Cafe-Restaurant Polder (Google Maps), sponsored by SURF
- registration (closed since Fri 9 Sept'22)
- Slack channel:
community-meeting-2022
in EESSI Slack - YouTube playlist with recorded talks
"},{"location":"meetings/2022-09-amsterdam/#agenda","title":"Agenda","text":"(subject to changes)
We envision a mix of presentations, experience reports, demos, and hands-on sessions and/or hackathons related to the EESSI project.
If you would like to give a talk or host a session, please let us know via the EESSI Slack!
"},{"location":"meetings/2022-09-amsterdam/#wed-14-sept-2022","title":"Wed 14 Sept 2022","text":" - [10:00-13:00] Welcome session
- [10:00-10:30] Walk-in, coffee
- [10:30-12:00] Round table discussion (not live-streamed!)
- [12:00-13:00] Lunch
- [13:00-15:00] Presentations on EESSI
- [13:00-13:30] Introduction to EESSI (Caspar) [slides - recording]
- [13:30-14:00] Hands-on: how to use EESSI (Kenneth) [slides - recording]
- [14:00-14:30] EESSI use cases (Kenneth) [slides - recording]
- [14:30-15:00] EESSI for sysadmins (Thomas) [slides - recording]
- [15:00-15:30] Coffee break
- [15:30-17:00] Presentations on EESSI (continued)
- [15:30-16:00] Hands-on: installing EESSI (Thomas/Kenneth)
- [16:00-16:45] ComputeCanada site talk (Bart Oldeman, remote) [slides - recording]
- [16:45-17:15] Magic Castle (Felix-Antoine Fortin, remote) [slides - recording]
- [19:00-...] Group dinner @ Saravanaa Bhavan (sponsored by Dell Technologies)
- address: Stadhouderskade 123-124, Amsterdam
"},{"location":"meetings/2022-09-amsterdam/#thu-15-sept-2022","title":"Thu 15 Sept 2022","text":" - [09:30-12:00] More focused presentations on aspects of EESSI
- [09:30-10:00] EESSI behind the scenes: compat layer (Bob) [slides - recording]
- [10:00-10:30] EESSI behind the scenes: software layer (Kenneth) [slides - recording]
- [10:30-11:00] Coffee break
- [11:00-11:30] EESSI behind the scenes: infrastructure (Terje) [slides - recording]
- [11:30-12:00] Status on RISC-V support (Kenneth) [slides - recording]
- [12:00-13:00] Lunch
- [13:00-14:00] Discussions/hands-on sessions/hackathon
- [14:00-14:30] Status on GPU support (Alan) [slides - recording]
- [14:30-15:00] Status on build-and-deploy bot (Thomas) [slides - recording]
- [15:00-15:30] Coffee break
- [15:30-17:00] Discussions/hands-on sessions/hackathon (continued)
- Hands-on with GPUs (Alan)
- Hands-on with bot (Thomas/Kenneth)
- [19:00-...] Group dinner @ Italia Oggi (sponsored by HPC-UGent)
- address: Binnen Bantammerstraat 11, Amsterdam
"},{"location":"meetings/2022-09-amsterdam/#fri-16-sept-2022","title":"Fri 16 Sept 2022","text":" - [09:30-12:00] Presentations on future work
- [09:30-10:00] Testing in software layer (Caspar) [slides - recording]
- [10:00-10:30] MultiXscale project (Alan) [slides - recording]
- [10:30-11:00] Coffee break
- [11:00-11:30] Short-term future work (Kenneth) [slides - recording]
- [11:30-12:00] Discussion: future management structure of EESSI (Alan) [slides - recording]
- [12:00-13:00] Lunch
- [13:00-14:00] Site reports [recording]
- NESSI (Thomas) [slides]
- NLPL (Stephan) [slides]
- HPCNow! (Danilo) [slides]
- Azure (Hugo) [slides]
- [14:00-14:30] Discussion: what would make or break EESSI for your site? (notes - recording)
- [14:30-15:45] Discussions/hands-on sessions/hackathon
- Hands-on with GPU support (Alan)
- Hands-on with bot (Thomas/Kenneth)
- Hands-on with software testing (Caspar)
- We need to leave the room by 16:00!
"},{"location":"repositories/pilot/","title":"Pilot","text":""},{"location":"repositories/pilot/#pilot-software-stack-202112","title":"Pilot software stack (2021.12)","text":""},{"location":"repositories/pilot/#caveats","title":"Caveats","text":"Danger
The EESSI pilot repository is no longer actively maintained, and should not be used for production work.
Please use the software.eessi.io
repository instead.
The current EESSI pilot software stack (version 2021.12) is the 7th iteration, and there are some known issues and limitations; please take these into account:
- First of all: the EESSI pilot software stack is NOT READY FOR PRODUCTION!
Do not use it for production work, and be careful when testing it on production systems!
"},{"location":"repositories/pilot/#reporting-problems","title":"Reporting problems","text":"If you notice any problems, please report them via https://github.com/EESSI/software-layer/issues.
"},{"location":"repositories/pilot/#accessing-the-eessi-pilot-repository-through-singularity","title":"Accessing the EESSI pilot repository through Singularity","text":"The easiest way to access the EESSI pilot repository is by using Singularity. If Singularity is installed already, no admin privileges are required. No other software is needed either on the host.
A container image is available in the GitHub Container Registry (see https://github.com/EESSI/filesystem-layer/pkgs/container/client-pilot). It only contains a minimal operating system + the necessary packages to access the EESSI pilot repository through CernVM-FS, and it is suitable for aarch64
, ppc64le
, and x86_64
.
The container image can be used directly by Singularity (no prior download required), as follows:
-
First, create some local directories in /tmp/$USER
which will be bind mounted in the container:
mkdir -p /tmp/$USER/{var-lib-cvmfs,var-run-cvmfs,home}\n
These provide space for the CernVM-FS cache, and an empty home directory to use in the container.
Set the $SINGULARITY_BIND
and $SINGULARITY_HOME
environment variables to configure Singularity:
export SINGULARITY_BIND=\"/tmp/$USER/var-run-cvmfs:/var/run/cvmfs,/tmp/$USER/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"/tmp/$USER/home:/home/$USER\"\n
-
Start the container using singularity shell
, using --fusemount
to mount the EESSI pilot repository (using the cvmfs2
command that is included in the container image):
export EESSI_PILOT=\"container:cvmfs2 pilot.eessi-hpc.org /cvmfs/pilot.eessi-hpc.org\"\nsingularity shell --fusemount \"$EESSI_PILOT\" docker://ghcr.io/eessi/client-pilot:centos7\n
-
This should give you a shell in the container, where the EESSI pilot repository is mounted:
$ singularity shell --fusemount \"$EESSI_PILOT\" docker://ghcr.io/eessi/client-pilot:centos7\nINFO: Using cached SIF image\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nSingularity>\n
- It is possible that you see some scary looking warnings, but those can be ignored for now.
To verify that things are working, check the contents of the /cvmfs/pilot.eessi-hpc.org/versions/2021.12
directory:
Singularity> ls /cvmfs/pilot.eessi-hpc.org/versions/2021.12\ncompat init software\n
"},{"location":"repositories/pilot/#standard-installation","title":"Standard installation","text":"For those with privileges on their system, there are a number of example installation scripts for different architectures and operating systems available in the EESSI demo repository.
Here we prefer the Singularity approach as we can guarantee that the container image is up to date.
"},{"location":"repositories/pilot/#setting-up-the-eessi-environment","title":"Setting up the EESSI environment","text":"Once you have the EESSI pilot repository mounted, you can set up the environment by sourcing the provided init script:
source /cvmfs/pilot.eessi-hpc.org/versions/2021.12/init/bash\n
If all goes well, you should see output like this:
Found EESSI pilot repo @ /cvmfs/pilot.eessi-hpc.org/versions/2021.12!\nUsing x86_64/intel/haswell as software subdirectory.\nUsing /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI pilot software stack, have fun!\n[EESSI pilot 2021.12] $ \n
Now you're all set up! Go ahead and explore the software stack using \"module avail
\", and go wild with testing the available software installations!
"},{"location":"repositories/pilot/#testing-the-eessi-pilot-software-stack","title":"Testing the EESSI pilot software stack","text":"Please test the EESSI pilot software stack as you see fit: running simple commands, performing small calculations or running small benchmarks, etc.
Test scripts that have been verified to work correctly using the pilot software stack are available at https://github.com/EESSI/software-layer/tree/main/tests.
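For example, a minimal check could look like this (an illustration; it assumes the TensorFlow module from the software overview below is available for your CPU target):
# load a module from the pilot stack (pick any module that \"module avail\" lists for your system)\nmodule load TensorFlow/2.3.1-foss-2020a-Python-3.8.2\n# run a trivial import to confirm that the installation works\npython -c \"import tensorflow as tf; print(tf.__version__)\"\n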
"},{"location":"repositories/pilot/#giving-feedback-or-reporting-problems","title":"Giving feedback or reporting problems","text":"Any feedback is welcome, and questions or problems reports are welcome as well, through one of the EESSI communication channels:
- (preferred!) EESSI
software-layer
GitHub repository: https://github.com/EESSI/software-layer/issues - EESSI mailing list (
eessi@list.rug.nl
) - EESSI Slack: https://eessi-hpc.slack.com (get an invite via https://www.eessi-hpc.org/join)
- monthly EESSI meetings (first Thursday of the month at 2pm CEST)
"},{"location":"repositories/pilot/#available-software","title":"Available software","text":"(last update: Mar 21st 2022)
EESSI currently supports the following HPC applications as well as all their dependencies:
- GROMACS (2020.1 and 2020.4)
- OpenFOAM (v2006 and 8)
- R (4.0.0) + R-bundle-Bioconductor (3.11) + RStudio Server (1.3.1093)
- TensorFlow (2.3.1) and Horovod (0.21.3)
- OSU-Micro-Benchmarks (5.6.3)
- ReFrame (3.9.1)
- Spark (3.1.1)
- IPython (7.15.0)
- QuantumESPRESSO (6.6) (currently not available on
ppc64le
) - WRF (3.9.1.1)
[EESSI pilot 2021.12] $ module --nx avail\n\n--------------------------- /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all ----------------------------\n ant/1.10.8-Java-11 LMDB/0.9.24-GCCcore-9.3.0\n Arrow/0.17.1-foss-2020a-Python-3.8.2 lz4/1.9.2-GCCcore-9.3.0\n Bazel/3.6.0-GCCcore-9.3.0 Mako/1.1.2-GCCcore-9.3.0\n Bison/3.5.3-GCCcore-9.3.0 MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n Boost/1.72.0-gompi-2020a matplotlib/3.2.1-foss-2020a-Python-3.8.2\n cairo/1.16.0-GCCcore-9.3.0 Mesa/20.0.2-GCCcore-9.3.0\n CGAL/4.14.3-gompi-2020a-Python-3.8.2 Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2\n CMake/3.16.4-GCCcore-9.3.0 METIS/5.1.0-GCCcore-9.3.0\n CMake/3.20.1-GCCcore-10.3.0 MPFR/4.0.2-GCCcore-9.3.0\n code-server/3.7.3 NASM/2.14.02-GCCcore-9.3.0\n DB/18.1.32-GCCcore-9.3.0 ncdf4/1.17-foss-2020a-R-4.0.0\n DB/18.1.40-GCCcore-10.3.0 netCDF-Fortran/4.5.2-gompi-2020a\n double-conversion/3.1.5-GCCcore-9.3.0 netCDF/4.7.4-gompi-2020a\n Doxygen/1.8.17-GCCcore-9.3.0 nettle/3.6-GCCcore-9.3.0\n EasyBuild/4.5.0 networkx/2.4-foss-2020a-Python-3.8.2\n EasyBuild/4.5.1 (D) Ninja/1.10.0-GCCcore-9.3.0\n Eigen/3.3.7-GCCcore-9.3.0 NLopt/2.6.1-GCCcore-9.3.0\n Eigen/3.3.9-GCCcore-10.3.0 NSPR/4.25-GCCcore-9.3.0\n ELPA/2019.11.001-foss-2020a NSS/3.51-GCCcore-9.3.0\n expat/2.2.9-GCCcore-9.3.0 nsync/1.24.0-GCCcore-9.3.0\n expat/2.2.9-GCCcore-10.3.0 numactl/2.0.13-GCCcore-9.3.0\n FFmpeg/4.2.2-GCCcore-9.3.0 numactl/2.0.14-GCCcore-10.3.0\n FFTW/3.3.8-gompi-2020a OpenBLAS/0.3.9-GCC-9.3.0\n FFTW/3.3.9-gompi-2021a OpenBLAS/0.3.15-GCC-10.3.0\n flatbuffers/1.12.0-GCCcore-9.3.0 OpenFOAM/v2006-foss-2020a\n FlexiBLAS/3.0.4-GCC-10.3.0 OpenFOAM/8-foss-2020a (D)\n fontconfig/2.13.92-GCCcore-9.3.0 OpenMPI/4.0.3-GCC-9.3.0\n foss/2020a OpenMPI/4.1.1-GCC-10.3.0\n foss/2021a OpenPGM/5.2.122-GCCcore-9.3.0\n freetype/2.10.1-GCCcore-9.3.0 OpenSSL/1.1 (D)\n FriBidi/1.0.9-GCCcore-9.3.0 OSU-Micro-Benchmarks/5.6.3-gompi-2020a\n GCC/9.3.0 Pango/1.44.7-GCCcore-9.3.0\n GCC/10.3.0 ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi\n GCCcore/9.3.0 PCRE/8.44-GCCcore-9.3.0\n GCCcore/10.3.0 PCRE2/10.34-GCCcore-9.3.0\n Ghostscript/9.52-GCCcore-9.3.0 Perl/5.30.2-GCCcore-9.3.0\n giflib/5.2.1-GCCcore-9.3.0 Perl/5.32.1-GCCcore-10.3.0\n git/2.23.0-GCCcore-9.3.0-nodocs pixman/0.38.4-GCCcore-9.3.0\n git/2.32.0-GCCcore-10.3.0-nodocs (D) pkg-config/0.29.2-GCCcore-9.3.0\n GLib/2.64.1-GCCcore-9.3.0 pkg-config/0.29.2-GCCcore-10.3.0\n GLPK/4.65-GCCcore-9.3.0 pkg-config/0.29.2 (D)\n GMP/6.2.0-GCCcore-9.3.0 pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2\n GMP/6.2.1-GCCcore-10.3.0 PMIx/3.1.5-GCCcore-9.3.0\n gnuplot/5.2.8-GCCcore-9.3.0 PMIx/3.2.3-GCCcore-10.3.0\n GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2 poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2\n gompi/2020a protobuf-python/3.13.0-foss-2020a-Python-3.8.2\n gompi/2021a protobuf/3.13.0-GCCcore-9.3.0\n groff/1.22.4-GCCcore-9.3.0 pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2\n groff/1.22.4-GCCcore-10.3.0 pybind11/2.6.2-GCCcore-10.3.0\n GROMACS/2020.1-foss-2020a-Python-3.8.2 Python/2.7.18-GCCcore-9.3.0\n GROMACS/2020.4-foss-2020a-Python-3.8.2 (D) Python/3.8.2-GCCcore-9.3.0\n GSL/2.6-GCC-9.3.0 Python/3.9.5-GCCcore-10.3.0-bare\n gzip/1.10-GCCcore-9.3.0 Python/3.9.5-GCCcore-10.3.0\n h5py/2.10.0-foss-2020a-Python-3.8.2 PyYAML/5.3-GCCcore-9.3.0\n HarfBuzz/2.6.4-GCCcore-9.3.0 Qt5/5.14.1-GCCcore-9.3.0\n HDF5/1.10.6-gompi-2020a QuantumESPRESSO/6.6-foss-2020a\n Horovod/0.21.3-foss-2020a-TensorFlow-2.3.1-Python-3.8.2 R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n hwloc/2.2.0-GCCcore-9.3.0 
R/4.0.0-foss-2020a\n hwloc/2.4.1-GCCcore-10.3.0 re2c/1.3-GCCcore-9.3.0\n hypothesis/6.13.1-GCCcore-10.3.0 RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n ICU/66.1-GCCcore-9.3.0 Rust/1.52.1-GCCcore-10.3.0\n ImageMagick/7.0.10-1-GCCcore-9.3.0 ScaLAPACK/2.1.0-gompi-2020a\n IPython/7.15.0-foss-2020a-Python-3.8.2 ScaLAPACK/2.1.0-gompi-2021a-fb\n JasPer/2.0.14-GCCcore-9.3.0 scikit-build/0.10.0-foss-2020a-Python-3.8.2\n Java/11.0.2 (11) SciPy-bundle/2020.03-foss-2020a-Python-3.8.2\n jbigkit/2.1-GCCcore-9.3.0 SciPy-bundle/2021.05-foss-2021a\n JsonCpp/1.9.4-GCCcore-9.3.0 SCOTCH/6.0.9-gompi-2020a\n LAME/3.100-GCCcore-9.3.0 snappy/1.1.8-GCCcore-9.3.0\n libarchive/3.5.1-GCCcore-10.3.0 Spark/3.1.1-foss-2020a-Python-3.8.2\n libcerf/1.13-GCCcore-9.3.0 SQLite/3.31.1-GCCcore-9.3.0\n libdrm/2.4.100-GCCcore-9.3.0 SQLite/3.35.4-GCCcore-10.3.0\n libevent/2.1.11-GCCcore-9.3.0 SWIG/4.0.1-GCCcore-9.3.0\n libevent/2.1.12-GCCcore-10.3.0 Szip/2.1.1-GCCcore-9.3.0\n libfabric/1.11.0-GCCcore-9.3.0 Tcl/8.6.10-GCCcore-9.3.0\n libfabric/1.12.1-GCCcore-10.3.0 Tcl/8.6.11-GCCcore-10.3.0\n libffi/3.3-GCCcore-9.3.0 tcsh/6.22.02-GCCcore-9.3.0\n libffi/3.3-GCCcore-10.3.0 TensorFlow/2.3.1-foss-2020a-Python-3.8.2\n libgd/2.3.0-GCCcore-9.3.0 time/1.9-GCCcore-9.3.0\n libGLU/9.0.1-GCCcore-9.3.0 Tk/8.6.10-GCCcore-9.3.0\n libglvnd/1.2.0-GCCcore-9.3.0 Tkinter/3.8.2-GCCcore-9.3.0\n libiconv/1.16-GCCcore-9.3.0 UCX/1.8.0-GCCcore-9.3.0\n libjpeg-turbo/2.0.4-GCCcore-9.3.0 UCX/1.10.0-GCCcore-10.3.0\n libpciaccess/0.16-GCCcore-9.3.0 UDUNITS/2.2.26-foss-2020a\n libpciaccess/0.16-GCCcore-10.3.0 UnZip/6.0-GCCcore-9.3.0\n libpng/1.6.37-GCCcore-9.3.0 UnZip/6.0-GCCcore-10.3.0\n libsndfile/1.0.28-GCCcore-9.3.0 WRF/3.9.1.1-foss-2020a-dmpar\n libsodium/1.0.18-GCCcore-9.3.0 X11/20200222-GCCcore-9.3.0\n LibTIFF/4.1.0-GCCcore-9.3.0 x264/20191217-GCCcore-9.3.0\n libtirpc/1.2.6-GCCcore-9.3.0 x265/3.3-GCCcore-9.3.0\n libunwind/1.3.1-GCCcore-9.3.0 xorg-macros/1.19.2-GCCcore-9.3.0\n libxc/4.3.4-GCC-9.3.0 xorg-macros/1.19.3-GCCcore-10.3.0\n libxml2/2.9.10-GCCcore-9.3.0 Xvfb/1.20.9-GCCcore-9.3.0\n libxml2/2.9.10-GCCcore-10.3.0 Yasm/1.3.0-GCCcore-9.3.0\n libyaml/0.2.2-GCCcore-9.3.0 ZeroMQ/4.3.2-GCCcore-9.3.0\n LittleCMS/2.9-GCCcore-9.3.0 Zip/3.0-GCCcore-9.3.0\n LLVM/9.0.1-GCCcore-9.3.0 zstd/1.4.4-GCCcore-9.3.0\n
"},{"location":"repositories/pilot/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":""},{"location":"repositories/pilot/#x86_64","title":"x86_64","text":" - generic (currently implies
march=x86-64
and -mtune=generic
) - AMD
- zen2 (Rome)
- zen3 (Milan)
- Intel
- haswell
- skylake_avx512
"},{"location":"repositories/pilot/#aarch64arm64","title":"aarch64/arm64","text":" - generic (currently implies
-march=armv8-a
and -mtune=generic
) - AWS Graviton2
"},{"location":"repositories/pilot/#ppc64le","title":"ppc64le","text":" - generic
- power9le
"},{"location":"repositories/pilot/#easybuild-configuration","title":"EasyBuild configuration","text":"EasyBuild v4.5.1 was used to install the software in the 2021.12
version of the pilot repository. For some installations, pull requests with changes that will be included in later EasyBuild versions were leveraged; see the build script that was used.
An example configuration of the build environment based on https://github.com/EESSI/software-layer can be seen here:
$ eb --show-config\n#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath (E) = /tmp/eessi-build/easybuild/build\ncontainerpath (E) = /tmp/eessi-build/easybuild/containers\ndebug (E) = True\nfilter-deps (E) = Autoconf, Automake, Autotools, binutils, bzip2, cURL, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars (E) = LD_LIBRARY_PATH\nhooks (E) = /home/eessi-build/software-layer/eb_hooks.py\nignore-osdeps (E) = True\ninstallpath (E) = /cvmfs/pilot.eessi-hpc.org/2021.06/software/linux/x86_64/intel/haswell\nmodule-extensions (E) = True\npackagepath (E) = /tmp/eessi-build/easybuild/packages\nprefix (E) = /tmp/eessi-build/easybuild\nrepositorypath (E) = /tmp/eessi-build/easybuild/ebfiles_repo\nrobot-paths (D) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/software/EasyBuild/4.5.1/easybuild/easyconfigs\nrpath (E) = True\nsourcepath (E) = /tmp/eessi-build/easybuild/sources:\nsysroot (E) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/compat/linux/x86_64\ntrace (E) = True\nzip-logs (E) = bzip2\n
"},{"location":"repositories/pilot/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for the pilot repository is shown at http://status.eessi.io/pilot/.
"},{"location":"repositories/riscv.eessi.io/","title":"EESSI RISC-V development repository (riscv.eessi.io
)","text":"This repository contains development versions of an EESSI RISC-V software stack. Note that versions may be added, modified, or deleted at any time.
"},{"location":"repositories/riscv.eessi.io/#accessing-the-risc-v-repository","title":"Accessing the RISC-V repository","text":"See Getting access; by making the EESSI CVMFS domain available, you will automatically have access to riscv.eessi.io
as well.
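To verify that the repository can be reached, you can probe it (an optional check, assuming a native CernVM-FS installation that provides the cvmfs_config utility):
# should report OK if the repository can be mounted\ncvmfs_config probe riscv.eessi.io\n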
"},{"location":"repositories/riscv.eessi.io/#using-riscveessiio","title":"Using riscv.eessi.io
","text":"This repository currently offers one version (20240402), and this contains both a compatibility layer and a software layer. Furthermore, initialization scripts are in place to set up the repository:
$ source /cvmfs/riscv.eessi.io/versions/20240402/init/bash\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $\n
You can even source the initialization script of the software.eessi.io
production repository now, and it will automatically set up the RISC-V repository for you:
$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash \nRISC-V architecture detected, but there is no RISC-V support yet in the production repository.\nAutomatically switching to version 20240402 of the RISC-V development repository /cvmfs/riscv.eessi.io.\nFor more details about this repository, see https://www.eessi.io/docs/repositories/riscv.eessi.io/.\n\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all as the site extension directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $ \n
Note that we currently only provide generic builds, hence riscv64/generic
is being used for all RISC-V CPUs.
The amount of software is constantly increasing. Besides having the foss/2023b
toolchain available, applications like dlb, GROMACS, OSU Micro-Benchmarks, and R are already available as well. Use module avail
to get a full and up-to-date listing of available software.
"},{"location":"repositories/riscv.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for this repository is shown at https://status.eessi.io.
"},{"location":"repositories/software.eessi.io/","title":"Production EESSI repository (software.eessi.io
)","text":""},{"location":"repositories/software.eessi.io/#question-or-problems","title":"Question or problems","text":"If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request.
"},{"location":"repositories/software.eessi.io/#accessing-the-eessi-repository","title":"Accessing the EESSI repository","text":"See Getting access.
"},{"location":"repositories/software.eessi.io/#using-softwareeessiio","title":"Using software.eessi.io
","text":"See Using EESSI.
"},{"location":"repositories/software.eessi.io/#available-software","title":"Available software","text":"Detailed overview of available software coming soon!
For now, use module avail
after initializing the EESSI environment.
"},{"location":"repositories/software.eessi.io/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":"See CPU targets.
"},{"location":"repositories/software.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.
"},{"location":"software_layer/build_nodes/","title":"Build nodes","text":"Any system can be used as a build node to create additional software installations that should be added to the EESSI CernVM-FS repository.
"},{"location":"software_layer/build_nodes/#requirements","title":"Requirements","text":"OS and software:
- GNU/Linux (any distribution) as operating system;
- a recent version of Singularity (>= 3.6 is recommended);
- check with
singularity --version
screen
or tmux
is highly recommended;
Admin privileges are not required, as long as Singularity is installed.
Resources:
- 8 or more cores is recommended (though not strictly required);
- at least 50GB of free space on a local filesystem (like
/tmp
); - at least 16GB of memory (2GB/core or higher recommended);
Instructions to install Singularity and screen (click to show commands):
CentOS 8 (x86_64
or aarch64
or ppc64le
) sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\nsudo dnf update -y\nsudo dnf install -y screen singularity\n
"},{"location":"software_layer/build_nodes/#setting-up-the-container","title":"Setting up the container","text":"Warning
It is highly recommended to start a screen
or tmux
session first!
A container image is provided that includes everything that is required to set up a writable overlay on top of the EESSI CernVM-FS repository.
First, pick a location on a local filesystem for the temporary directory:
Requirements:
- Do not use a shared filesystem like NFS, Lustre or GPFS.
- There should be at least 50GB of free disk space in this local filesystem (more is better).
- There should be no automatic cleanup of old files via a cron job on this local filesystem.
- Try to make sure the directory is unique (not used by anything else).
NB. If you are going to install on a separate drive (due to lack of space on /), then you need to set some variables to point to that location. You will also need to bind mount it in the singularity
command. Let's say that you drive is mounted in /srt. Then you change the relevant commands below to this:
export EESSI_TMPDIR=/srt/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\nmkdir /srt/tmp\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs,/srt/tmp:/tmp\"\nsingularity shell -B /srt --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n
We will assume that /tmp/$USER/EESSI
meets these requirements:
export EESSI_TMPDIR=/tmp/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\n
Create some subdirectories in this temporary directory:
mkdir -p $EESSI_TMPDIR/{home,overlay-upper,overlay-work}\nmkdir -p $EESSI_TMPDIR/{var-lib-cvmfs,var-run-cvmfs}\n
Configure Singularity cache directory, bind mounts, and (fake) home directory:
export SINGULARITY_CACHEDIR=$EESSI_TMPDIR/singularity_cache\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"$EESSI_TMPDIR/home:/home/$USER\"\n
Define values to pass to --fusemount` in
singularity`` command:
export EESSI_READONLY=\"container:cvmfs2 software.eessi.io /cvmfs_ro/software.eessi.io\"\nexport EESSI_WRITABLE_OVERLAY=\"container:fuse-overlayfs -o lowerdir=/cvmfs_ro/software.eessi.io -o upperdir=$EESSI_TMPDIR/overlay-upper -o workdir=$EESSI_TMPDIR/overlay-work /cvmfs/software.eessi.io\"\n
Start the container (which includes Debian 11, CernVM-FS and fuse-overlayfs):
singularity shell --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian10\n
Once the container image has been downloaded and converted to a Singularity image (SIF format), you should get a prompt like this:
...\nCernVM-FS: loading Fuse module... done\n\nSingularity>\n
and the EESSI CernVM-FS repository should be mounted:
Singularity> ls /cvmfs/software.eessi.io\nhost_injections README.eessi versions\n
"},{"location":"software_layer/build_nodes/#setting-up-the-environment","title":"Setting up the environment","text":"Set up the environment by starting a Gentoo Prefix session using the startprefix
command.
Make sure you use the correct version of the EESSI repository!
export EESSI_VERSION='2023.06' \n/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/compat/linux/$(uname -m)/startprefix\n
"},{"location":"software_layer/build_nodes/#installing-software","title":"Installing software","text":"Clone the software-layer repository:
git clone https://github.com/EESSI/software-layer.git\n
Run the software installation script in software-layer
:
cd software-layer\n./EESSI-install-software.sh\n
This script will figure out the CPU microarchitecture of the host automatically (like x86_64/intel/haswell
).
To build generic software installations (like x86_64/generic
), use the --generic
option:
./EESSI-install-software.sh --generic\n
Once all missing software has been installed, you should see a message like this:
No missing modules!\n
"},{"location":"software_layer/build_nodes/#creating-tarball-to-ingest","title":"Creating tarball to ingest","text":"Before tearing down the build node, you should create tarball to ingest into the EESSI CernVM-FS repository.
To create a tarball of all installations, assuming your build host is x86_64/intel/haswell
:
export EESSI_VERSION='2023.06'\ncd /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/software/linux\neessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell.tar.gz\"\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell\n
To create a tarball for specific installations, make sure you pick up both the software installation directories and the corresponding module files:
eessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell-OpenFOAM.tar.gz\"\n\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell/software/OpenFOAM modules/all//OpenFOAM\n
This tarball should be uploaded to the Stratum 0 server for ingestion. If needed, you can ask for help in the EESSI #software-layer
Slack channel
"},{"location":"software_layer/cpu_targets/","title":"CPU targets","text":"In the 2023.06 version of the EESSI repository, the following CPU microarchitectures are supported.
aarch64/generic
: fallback for Arm 64-bit CPUs (like Raspberri Pi, etc.) aarch64/neoverse_n1
: AWS Graviton 2, Ampere Altra, ... aarch64/neoverse_v1
: AWS Graviton 3 x86_64/generic
: fallback for older Intel + AMD CPUs (like Intel Sandy Bridge, ...) x86_64/amd/zen2
: AMD Rome x86_64/amd/zen3
: AMD Milan, AMD Milan X x86_64/intel/haswell
: Intel Haswell, Broadwell x86_64/intel/skylake_avx512
: Intel Skylake, Cascade Lake, Ice Lake, ...
The names of these CPU targets correspond to the names used by archspec.
"},{"location":"talks/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"AWS HPC Tech Short (~8 min.) - 15 June 2023
"},{"location":"talks/2023/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"AWS HPC Tech Short (~8 min.) - 15 June 2023
"},{"location":"talks/2023/20231027_packagingcon23_eessi/","title":"Streaming optimized scientific software installations on any Linux distro with EESSI","text":" - PackagingCon'2023 (Berlin, Germany) - 27 Oct 2023
- presented by Kenneth Hoste & Lara Peeters (HPC-UGent)
- slides (PDF)
"},{"location":"talks/2023/20231204_cvmfs_hpc/","title":"Best Practices for CernVM-FS in HPC","text":" - online tutorial (~3h15min), 4 Dec 2023
- presented by Kenneth Hoste (HPC-UGent)
- tutorial website: https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices
- slides (PDF)
"},{"location":"talks/2023/20231205_castiel2_eessi_intro/","title":"Streaming Optimised Scientific Software: an Introduction to EESSI","text":" - online tutorial (~1h40min) - 5 Dec 2023
- presented by Alan O'Cais (CECAM)
- slides (PDF)
"},{"location":"test-suite/","title":"EESSI test suite","text":"The EESSI test suite is a collection of tests that are run using ReFrame. It is used to check whether the software installations included in the EESSI software layer are working and performing as expected.
To get started, you should look into the installation and configuration guidelines first.
To write the ReFrame configuration file for your system, check ReFrame configuration file.
For which software tests are available, see available-tests.md.
For more information on using the EESSI test suite, see here.
See also release notes for the EESSI test suite.
"},{"location":"test-suite/ReFrame-configuration-file/","title":"ReFrame configuration file","text":"In order for ReFrame to run tests on your system, it needs to know some properties about your system. For example, it needs to know what kind of job scheduler you have, which partitions the system has, how to submit to those partitions, etc. All of this has to be described in a ReFrame configuration file (see also the section on $RFM_CONFIG_FILES
above).
This page is organized as follows:
- available ReFrame configuration file
- Verifying your ReFrame configuration
- How to write a ReFrame configuration file
"},{"location":"test-suite/ReFrame-configuration-file/#available-reframe-configuration-file","title":"Available ReFrame configuration file","text":"There are some available ReFrame configuration files for HPC systems and public cloud in the config directory for more inspiration. Below is a simple ReFrame configuration file with minimal changes required for getting you started on using the test suite for a CPU partition. Please check that stagedir
is set to a path on a (shared) scratch filesystem for storing (temporary) files related to the tests, and access
is set to a list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu' for submitting to a Slurm partition called cpu).
To write a ReFrame configuration file for your system, check the section How to write a ReFrame configuration file.
\"\"\"\nsimple ReFrame configuration file\n\"\"\"\nimport os\n\nfrom eessi.testsuite.common_config import common_logging_config, common_eessi_init, format_perfvars, perflog_format\nfrom eessi.testsuite.constants import * \n\nsite_configuration = {\n 'systems': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition',\n 'modules_system': 'lmod',\n 'hostnames': ['*'],\n # Note that the stagedir should be a shared directory available on all nodes running ReFrame tests\n 'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n 'partitions': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition',\n 'scheduler': 'slurm',\n 'launcher': 'mpirun',\n 'access': ['-p cpu', '--export=None'],\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'resources': [\n {\n 'name': 'memory',\n 'options': ['--mem={size}'],\n }\n ],\n 'features': [\n FEATURES[CPU]\n ] + list(SCALES.keys()),\n }\n ]\n },\n ],\n 'environments': [\n {\n 'name': 'default',\n 'cc': 'cc',\n 'cxx': '',\n 'ftn': '',\n },\n ],\n 'logging': common_logging_config(),\n 'general': [\n {\n # Enable automatic detection of CPU architecture for each partition\n # See https://reframe-hpc.readthedocs.io/en/stable/configure.html#auto-detecting-processor-information\n 'remote_detect': True,\n }\n ],\n}\n\n# optional logging to syslog\nsite_configuration['logging'][0]['handlers_perflog'].append({\n 'type': 'syslog',\n 'address': '/dev/log',\n 'level': 'info',\n 'format': f'reframe: {perflog_format}',\n 'format_perfvars': format_perfvars,\n 'append': True,\n})\n
"},{"location":"test-suite/ReFrame-configuration-file/#verifying-your-reframe-configuration","title":"Verifying your ReFrame configuration","text":"To verify the ReFrame configuration, you can query the configuration using --show-config
.
To see the full configuration, use:
reframe --show-config\n
To only show the configuration of a particular system partition, you can use the --system
option. To query a specific setting, you can pass an argument to --show-config
.
For example, to show the configuration of the gpu
partition of the example
system:
reframe --system example:gpu --show-config systems/0/partitions\n
You can drill it down further to only show the value of a particular configuration setting.
For example, to only show the launcher
value for the gpu
partition of the example
system:
reframe --system example:gpu --show-config systems/0/partitions/@gpu/launcher\n
"},{"location":"test-suite/ReFrame-configuration-file/#how-to-write-a-reframe-configuration-file","title":"How to write a ReFrame configuration file","text":"The official ReFrame documentation provides the full description on configuring ReFrame for your site. However, there are some configuration settings that are specifically required for the EESSI test suite. Also, there are a large amount of configuration settings available in ReFrame, which makes the official documentation potentially a bit overwhelming.
Here, we will describe how to create a configuration file that works with the EESSI test suite, starting from an example configuration file settings_example.py
, which defines the most common configuration settings.
"},{"location":"test-suite/ReFrame-configuration-file/#python-imports","title":"Python imports","text":"The EESSI test suite standardizes a few string-based values as constants, as well as the logging format used by ReFrame. Every ReFrame configuration file used for running the EESSI test suite should therefore start with the following import statements:
from eessi.testsuite.common_config import common_logging_config, common_eessi_init\nfrom eessi.testsuite.constants import *\n
"},{"location":"test-suite/ReFrame-configuration-file/#high-level-system-info-systems","title":"High-level system info (systems
)","text":"First, we describe the system at its highest level through the systems
keyword.
You can define multiple systems in a single configuration file (systems
is a Python list value). We recommend defining just a single system in each configuration file, as it makes the configuration file a bit easier to digest (for humans).
An example of the systems
section of the configuration file would be:
site_configuration = {\n 'systems': [\n # We could list multiple systems. Here, we just define one\n {\n 'name': 'example',\n 'descr': 'Example cluster',\n 'modules_system': 'lmod',\n 'hostnames': ['*'],\n 'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n 'partitions': [...],\n }\n ]\n}\n
The most common configuration items defined at this level are:
name
: The name of the system. Pick whatever makes sense for you. descr
: Description of the system. Again, pick whatever you like. modules_system
: The modules system used on your system. EESSI provides modules in lmod
format. There is no need to change this, unless you want to run tests from the EESSI test suite with non-EESSI modules. hostnames
: The names of the hosts on which you will run the ReFrame command, as regular expression. Using these names, ReFrame can automatically determine which of the listed configurations in the systems
list to use, which is useful if you're defining multiple systems in a single configuration file. If you follow our recommendation to limit yourself to one system per configuration file, simply define 'hostnames': ['*']
. prefix
: Prefix directory for a ReFrame run on this system. Any directories or files produced by ReFrame will use this prefix, if not specified otherwise. We recommend setting the $RFM_PREFIX
environment variable rather than specifying prefix
in your configuration file, so our common logging configuration can pick up on it (see also $RFM_PREFIX
). stagedir
: A shared directory that is available on all nodes that will execute ReFrame tests. This is used for storing (temporary) files related to the test. Typically, you want to set this to a path on a (shared) scratch filesystem. Defining this is optional: the default is a 'stage
' directory inside the prefix
directory. partitions
: Details on system partitions, see below.
"},{"location":"test-suite/ReFrame-configuration-file/#partitions","title":"System partitions (systems.partitions
)","text":"The next step is to add the system partitions to the configuration files, which is also specified as a Python list since a system can have multiple partitions.
The partitions
section of the configuration for a system with two Slurm partitions (one CPU partition, and one GPU partition) could for example look something like this:
site_configuration = {\n 'systems': [\n {\n ...\n 'partitions': [\n {\n 'name': 'cpu_partition',\n 'descr': 'CPU partition'\n 'scheduler': 'slurm',\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'launcher': 'mpirun',\n 'access': ['-p cpu'],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'features': [\n FEATURES[CPU]\n ] + list(SCALES.keys()),\n },\n {\n 'name': 'gpu_partition',\n 'descr': 'GPU partition'\n 'scheduler': 'slurm',\n 'prepare_cmds': ['source %s' % common_eessi_init()],\n 'launcher': 'mpirun',\n 'access': ['-p gpu'],\n 'environs': ['default'],\n 'max_jobs': 4,\n 'resources': [\n {\n 'name': '_rfm_gpu',\n 'options': ['--gpus-per-node={num_gpus_per_node}'],\n }\n ],\n 'devices': [\n {\n 'type': DEVICE_TYPES[GPU],\n 'num_devices': 4,\n }\n ],\n 'features': [\n FEATURES[CPU],\n FEATURES[GPU],\n ],\n 'extras': {\n GPU_VENDOR: GPU_VENDORS[NVIDIA],\n },\n },\n ]\n }\n ]\n}\n
The most common configuration items defined at this level are:
name
: The name of the partition. Pick anything you like. descr
: Description of the partition. Again, pick whatever you like. scheduler
: The scheduler used to submit to this partition, for example slurm
. All valid options can be found in the ReFrame documentation. launcher
: The parallel launcher used on this partition, for example mpirun
or srun
. All valid options can be found in the ReFrame documentation. access
: A list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu
' for submitting to a Slurm partition called cpu
). If supported by your scheduler, we recommend to not export the submission environment (for example by using '--export=None
' with Slurm). This avoids test failures due to environment variables set in the submission environment that are passed down to submitted jobs. prepare_cmds
: Commands to execute at the start of every job that runs a test. If your batch scheduler does not export the environment of the submit host, this is typically where you can initialize the EESSI environment. environs
: The names of the programming environments (to be defined later in the configuration file via environments
) that may be used on this partition. A programming environment is required for tests that are compiled first, before they can run. The EESSI test suite however only tests existing software installations, so no compilation (or specific programming environment) is needed. Simply specify 'environs': ['default']
, since ReFrame requires that a default environment is defined. max_jobs
: The maximum amount of jobs ReFrame is allowed to submit in parallel. Some batch systems limit how many jobs users are allowed to have in the queue. You can use this to make sure ReFrame doesn't exceed that limit. resources
: This field defines how additional resources can be requested in a batch job. Specifically, on a GPU partition, you have to define a resource with the name '_rfm_gpu
'. The options
field should then contain the argument to be passed to the batch scheduler in order to request a certain number of GPUs per node, which could be different for different batch schedulers. For example, when using Slurm you would specify: 'resources': [\n {\n 'name': '_rfm_gpu',\n 'options': ['--gpus-per-node={num_gpus_per_node}'],\n },\n],\n
processor
: We recommend to NOT define this field, unless CPU autodetection is not working for you. The EESSI test suite relies on information about your processor topology to run. Using CPU autodetection is the easiest way to ensure that all processor-related information needed by the EESSI test suite is defined. Only if CPU autodetection is failing for you do we advise you to set the processor
in the partition configuration as an alternative. Although additional fields might be used by future EESSI tests, at this point you'll have to specify at least the following fields: 'processor': {\n 'num_cpus': 64, # Total number of CPU cores in a node\n 'num_sockets': 2, # Number of sockets in a node\n 'num_cpus_per_socket': 32, # Number of CPU cores per socket\n 'num_cpus_per_core': 1, # Number of hardware threads per CPU core\n} \n
features
: The features
field is used by the EESSI test suite to run tests only on a partition if it supports a certain feature (for example if GPUs are available). Feature names are standardized in the EESSI test suite in eessi.testsuite.constants.FEATURES
dictionary. Typically, you want to define features: [FEATURES[CPU]] + list(SCALES.keys())
for CPU based partitions, and features: [FEATURES[GPU]] + list(SCALES.keys())
for GPU based partitions. The first tells the EESSI test suite that this partition can only run CPU-based tests, whereas second indicates that this partition can only run GPU-based tests. You can define a single partition to have both the CPU and GPU features (since features
is a Python list). However, since the CPU-based tests will not ask your batch scheduler for GPU resources, this may fail on batch systems that force you to ask for at least one GPU on GPU-based nodes. Also, running CPU-only code on a GPU node is typically considered bad practice, thus testing its functionality is typically not relevant. The list(SCALES.keys())
adds all the scales that may be used by EESSI tests to the features
list. These scales are defined in eessi.testsuite.constants.SCALES
and define at which scales tests should be run, e.g. single core, half a node, a full node, two nodes, etc. This can be used to exclude running at certain scales on systems that would not support it. E.g. some systems might not support requesting multiple partial nodes, which is what the 1_cpn_2_nodes
(1 core per node, on two nodes) and 1_cpn_4_nodes
scales do. One could exclude these by setting e.g. features: [FEATURES[CPU]] + [s for s in SCALES if s not in ['1_cpn_2_nodes', '1_cpn_4_nodes']]
. With this configuration setting, ReFrame will run all the scales listed in `eessi.testsuite.constants.SCALES except those two. In a similar way, one could exclude all multinode tests if one just has a single node available. devices
: This field specifies information on devices (for example) present in the partition. Device types are standardized in the EESSI test suite in the eessi.testsuite.constants.DEVICE_TYPES
dictionary. This is used by the EESSI test suite to determine how many of these devices it can/should use per node. Typically, there is no need to define devices
for CPU partitions. For GPU partitions, you want to define something like: 'devices': [\n {\n 'type': DEVICE_TYPES[GPU],\n 'num_devices': 4, # or however many GPUs you have per node\n }\n]\n
extras
: This field specifies extra information on the partition, such as the GPU vendor. Valid fields for extras
are standardized as constants in eessi.testsuite.constants
(for example GPU_VENDOR
). This is used by the EESSI test suite to decide if a partition can run a test that specifically requires a certain brand of GPU. Typically, there is no need to define extras
for CPU partitions. For GPU partitions, you typically want to specify the GPU vendor, for example: 'extras': {\n GPU_VENDOR: GPU_VENDORS[NVIDIA]\n}\n
Note that as more tests are added to the EESSI test suite, the use of features
, devices
and extras
by the EESSI test suite may be extended, which may require an update of your configuration file to define newly recognized fields.
Note
Keep in mind that ReFrame partitions are virtual entities: they may or may not correspond to a partition as it is configured in your batch system. One might for example have a single partition in the batch system, but configure it as two separate partitions in the ReFrame configuration file based on additional constraints that are passed to the scheduler, see for example the AWS CitC example configuration.
The EESSI test suite (and more generally, ReFrame) assumes the hardware within a partition defined in the ReFrame configuration file is homogeneous.
"},{"location":"test-suite/ReFrame-configuration-file/#environments","title":"Environments","text":"ReFrame needs a programming environment to be defined in its configuration file for tests that need to be compiled before they are run. While we don't have such tests in the EESSI test suite, ReFrame requires some programming environment to be defined:
site_configuration = {\n ...\n 'environments': [\n {\n 'name': 'default', # Note: needs to match whatever we set for 'environs' in the partition\n 'cc': 'cc',\n 'cxx': '',\n 'ftn': '',\n }\n ]\n}\n
Note
The name
here needs to match whatever we specified for the environs
property of the partitions.
"},{"location":"test-suite/ReFrame-configuration-file/#logging","title":"Logging","text":"ReFrame allows a large degree of control over what gets logged, and where. For convenience, we have created a common logging configuration in eessi.testsuite.common_config
that provides a reasonable default. It can be used by importing common_logging_config
and calling it as a function to define the 'logging
setting:
from eessi.testsuite.common_config import common_logging_config\n\nsite_configuration = {\n ...\n 'logging': common_logging_config(),\n}\n
When combined by setting the $RFM_PREFIX
environment variable, the output, performance log, and regular ReFrame logs will all end up in the directory specified by $RFM_PREFIX
, which we recommend doing. Alternatively, a prefix can be passed as an argument like common_logging_config(prefix)
, which will control where the regular ReFrame log ends up. Note that the performance logs do not respect this prefix: they will still end up in the standard ReFrame prefix (by default the current directory, unless otherwise set with $RFM_PREFIX
or --prefix
).
"},{"location":"test-suite/ReFrame-configuration-file/#cpu-auto-detection","title":"Auto-detection of processor information","text":"You can let ReFrame auto-detect the processor information for your system.
ReFrame will automatically use auto-detection when two conditions are met:
- The
partitions
section of you configuration file does not specify processor
information for a particular partition (as per our recommendation in the previous section); - The
remote_detect
option is enabled in the general
part of the configuration, as follows: site_configuration = {\n 'systems': ...\n 'logging': ...\n 'general': [\n {\n 'remote_detect': True,\n }\n ]\n}\n
To trigger the auto-detection of processor information, it is sufficient to let ReFrame list the available tests:
reframe --list\n
ReFrame will store the processor information for your system in ~/.reframe/topology/<system>-<partition>/processor.json
.
"},{"location":"test-suite/available-tests/","title":"Available tests","text":"The EESSI test suite currently includes tests for:
- GROMACS
- TensorFlow
- OSU Micro-Benchmarks
For a complete overview of all available tests in the EESSI test suite, see the eessi/testsuite/tests
subdirectory in the EESSI/test-suite
GitHub repository.
"},{"location":"test-suite/available-tests/#gromacs","title":"GROMACS","text":"Several tests for GROMACS, a software package to perform molecular dynamics simulations, are included, which use the systems included in the HECBioSim benchmark suite:
Crambin
(20K atom system) Glutamine-Binding-Protein
(61K atom system) hEGFRDimer
(465K atom system) hEGFRDimerSmallerPL
(465K atom system, only 10k steps) hEGFRDimerPair
(1.4M atom system) hEGFRtetramerPair
(3M atom system)
It is implemented in tests/apps/gromacs.py
, on top of the GROMACS test that is included in the ReFrame test library hpctestlib
.
To run this GROMACS test with all HECBioSim systems, use:
reframe --run --name GROMACS\n
To run this GROMACS test only for a specific HECBioSim system, use for example:
reframe --run --name 'GROMACS.*HECBioSim/hEGFRDimerPair'\n
To run this GROMACS test with the smallest HECBioSim system (Crambin
), you can use the CI
tag:
reframe --run --name GROMACS --tag CI\n
"},{"location":"test-suite/available-tests/#tensorflow","title":"TensorFlow","text":"A test for TensorFlow, a machine learning framework, is included, which is based on the \"Multi-worker training with Keras\" TensorFlow tutorial.
It is implemented in tests/apps/tensorflow/
.
To run this TensorFlow test, use:
reframe --run --name TensorFlow\n
Warning
This test requires TensorFlow v2.11 or newer, using an older TensorFlow version will not work!
"},{"location":"test-suite/available-tests/#osumicrobenchmarks","title":"OSU Micro-Benchmarks","text":"A test for OSU Micro-Benchmarks, which provides an MPI benchmark.
It is implemented in tests/apps/osu.py
.
To run this Osu Micro-Benchmark, use:
reframe --run --name OSU-Micro-Benchmarks\n
Warning
This test requires OSU Micro-Benchmarks v5.9 or newer, using an older OSU -Micro-Benchmark version will not work!
"},{"location":"test-suite/installation-configuration/","title":"Installing and configuring the EESSI test suite","text":"This page covers the requirements, installation and configuration of the EESSI test suite.
"},{"location":"test-suite/installation-configuration/#requirements","title":"Requirements","text":"The EESSI test suite requires
- Python >= 3.6
- ReFrame v4.3.3 (or newer)
- ReFrame test library (
hpctestlib
)
"},{"location":"test-suite/installation-configuration/#installing-reframe","title":"Installing Reframe","text":"General instructions for installing ReFrame are available in the ReFrame documentation. To check if ReFrame is available, run the reframe
command:
reframe --version\n
(for more details on the ReFrame version requirement, click here) Two important bugs were resolved in ReFrame's CPU autodetect functionality in version 4.3.3.
We strongly recommend you use ReFrame >= 4.3.3
.
If you are using an older version of ReFrame, you may encounter some issues:
- ReFrame will try to use the parallel launcher command configured for each partition (e.g.
mpirun
) when doing the remote autodetect. If there is no system-version of mpirun
available, that will fail (see ReFrame issue #2926). - CPU autodetection only worked when using a clone of the ReFrame repository, not when it was installed with
pip
or EasyBuild
(as is also the case for the ReFrame shipped with EESSI) (see ReFrame issue #2914).
"},{"location":"test-suite/installation-configuration/#installing-reframe-test-library-hpctestlib","title":"Installing ReFrame test library (hpctestlib
)","text":"The EESSI test suite requires that the ReFrame test library (hpctestlib
) is available, which is currently not included in a standard installation of ReFrame.
We recommend installing ReFrame using EasyBuild (version 4.8.1, or newer), or using a ReFrame installation that is available in the EESSI repository (version 2023.06, or newer).
For example (using EESSI):
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ReFrame/4.3.3\n
To check whether the ReFrame test library is available, try importing a submodule of the hpctestlib
Python package:
python3 -c 'import hpctestlib.sciapps.gromacs'\n
"},{"location":"test-suite/installation-configuration/#installation","title":"Installation","text":"To install the EESSI test suite, you can either use pip
or clone the GitHub repository directly:
"},{"location":"test-suite/installation-configuration/#pip-install","title":"Using pip
","text":"pip install git+https://github.com/EESSI/test-suite.git\n
"},{"location":"test-suite/installation-configuration/#cloning-the-repository","title":"Cloning the repository","text":"git clone https://github.com/EESSI/test-suite $HOME/EESSI-test-suite\ncd EESSI-test-suite\nexport PYTHONPATH=$PWD:$PYTHONPATH\n
"},{"location":"test-suite/installation-configuration/#verify-installation","title":"Verify installation","text":"To check whether the EESSI test suite installed correctly, try importing the eessi.testsuite
Python package:
python3 -c 'import eessi.testsuite'\n
"},{"location":"test-suite/installation-configuration/#configuration","title":"Configuration","text":"Before you can run the EESSI test suite, you need to create a configuration file for ReFrame that is specific to the system on which the tests will be run.
Example configuration files are available in the config
subdirectory of the EESSI/test-suite
GitHub repository](https://github.com/EESSI/test-suite/tree/main/config), which you can use as a template to create your own.
"},{"location":"test-suite/installation-configuration/#configuring-reframe-environment-variables","title":"Configuring ReFrame environment variables","text":"We recommend setting a couple of $RFM_*
environment variables to configure ReFrame, to avoid needing to include particular options to the reframe
command over and over again.
"},{"location":"test-suite/installation-configuration/#RFM_CONFIG_FILES","title":"ReFrame configuration file ($RFM_CONFIG_FILES
)","text":"(see also RFM_CONFIG_FILES
in ReFrame docs)
Define the $RFM_CONFIG_FILES
environment variable to instruct ReFrame which configuration file to use, for example:
export RFM_CONFIG_FILES=$HOME/EESSI-test-suite/config/example.py\n
Alternatively, you can use the --config-file
(or -C
) reframe
option.
See the section on the ReFrame configuration file below for more information.
"},{"location":"test-suite/installation-configuration/#search-path-for-tests-rfm_check_search_path","title":"Search path for tests ($RFM_CHECK_SEARCH_PATH
)","text":"(see also RFM_CHECK_SEARCH_PATH
in ReFrame docs)
Define the $RFM_CHECK_SEARCH_PATH
environment variable to tell ReFrame which directory to search for tests.
In addition, define $RFM_CHECK_SEARCH_RECURSIVE
to ensure that ReFrame searches $RFM_CHECK_SEARCH_PATH
recursively (i.e. so that also tests in subdirectories are found).
For example:
export RFM_CHECK_SEARCH_PATH=$HOME/EESSI-test-suite/eessi/testsuite/tests\nexport RFM_CHECK_SEARCH_RECURSIVE=1\n
Alternatively, you can use the --checkpath
(or -c
) and --recursive
(or -R
) reframe
options.
"},{"location":"test-suite/installation-configuration/#RFM_PREFIX","title":"ReFrame prefix ($RFM_PREFIX
)","text":"(see also RFM_PREFIX
in ReFrame docs)
Define the $RFM_PREFIX
environment variable to tell ReFrame where to store the files it produces. E.g.
export RFM_PREFIX=$HOME/reframe_runs\n
This involves:
- test output directories (which contain e.g. the job script, stderr and stdout for each of the test jobs)
- staging directories (unless otherwise specified by
staging
, see below); - performance logs;
Note that the default is for ReFrame to use the current directory as prefix. We recommend setting a prefix so that logs are not scattered around and nicely appended for each run.
If our common logging configuration is used, the regular ReFrame log file will also end up in the location specified by $RFM_PREFIX
.
Warning
Using the --prefix
option in your reframe
command is not equivalent to setting $RFM_PREFIX
, since our common logging configuration only picks up on the $RFM_PREFIX
environment variable to determine the location for the ReFrame log file.
"},{"location":"test-suite/release-notes/","title":"Release notes for EESSI test suite","text":""},{"location":"test-suite/release-notes/#020-7-march-2024","title":"0.2.0 (7 march 2024)","text":"This is a minor release of the EESSI test-suite
It includes:
- Implement the CI for regular runs on a system (#93)
- Add OSU tests and update the hooks and configs to make the tests portable (#54, #95, #96, #97, #110, #116, #117, #118, #121)
- Add extra scales to filter tests(#94)
- add new hook to filter out invalid scales based on features in the config (#111)
- unify test names (#108)
- updates to CI workflow ((#102, #103, #104, #105)
- Update common_config (#114)
- Add common config item to redirect the report file to the same directory as e.g. the perflog (#122)
- Fix code formatting + enforce it in CI workflow (#120)
Bug fixes:
- Fix hook _assign_num_tasks_per_node (#98)
- fix import common-config vsc_hortense (#99)
- fix typo in partition names in configuration file for vsc_hortense (#106)
"},{"location":"test-suite/release-notes/#010-5-october-2023","title":"0.1.0 (5 October 2023)","text":"Version 0.1.0 is the first release of the EESSI test suite.
It includes:
- A well-structured
eessi.testsuite
Python package that provides constants, utilities, hooks, and tests, which can be installed with \"pip install
\". - Tests for GROMACS and TensorFlow in
eessi.testsuite.tests.apps
that leverage the functionality provided by eessi.testsuite.*
. - Examples of ReFrame configuration files for various systems in the
config
subdirectory. - A
common_logging_config()
function to facilitate the ReFrame logging configuration. - A set of standard device types and features that can be used in the
partitions
section of the ReFrame configuration file. - A set of tags (
CI
+ scale
) that can be used to filter checks. - Scripts that show how to run the test suite.
"},{"location":"test-suite/usage/","title":"Using the EESSI test suite","text":"This page covers the usage of the EESSI test suite.
We assume you have already installed and configured the EESSI test suite on your system.
"},{"location":"test-suite/usage/#listing-available-tests","title":"Listing available tests","text":"To list the tests that are available in the EESSI test suite, use reframe --list
(or reframe -L
for short).
If you have properly configured ReFrame, you should see a (potentially long) list of checks in the output:
$ reframe --list\n...\n[List of matched checks]\n- ...\nFound 123 check(s)\n
Note
When using --list
, checks are only generated based on modules that are available in the system where the reframe
command is invoked.
The system partitions specified in your ReFrame configuration file are not taken into account when using --list
.
So, if --list
produces an overview of 50 checks, and you have 4 system partitions in your configuration file, actually running the test suite may result in (up to) 200 checks being executed.
"},{"location":"test-suite/usage/#dry-run","title":"Performing a dry run","text":"To perform a dry run of the EESSI test suite, use reframe --dry-run
:
$ reframe --dry-run\n...\n[==========] Running 1234 check(s)\n\n[----------] start processing checks\n[ DRY ] GROMACS_EESSI ...\n...\n[----------] all spawned checks have finished\n\n[ PASSED ] Ran 1234/1234 test case(s) from 1234 check(s) (0 failure(s), 0 skipped, 0 aborted)\n
Note
When using --dry-run
, the systems partitions listed in your ReFrame configuration file are also taken into account when generating checks, next to available modules and test parameters, which is not the case when using --list
.
"},{"location":"test-suite/usage/#running-the-full-test-suite","title":"Running the (full) test suite","text":"To actually run the (full) EESSI test suite and let ReFrame produce a performance report, use reframe --run --performance-report
.
We strongly recommend filtering the checks that will be run by using additional options like --system
, --name
, --tag
(see the 'Filtering tests' section below), and doing a dry run first to make sure that the generated checks correspond to what you have in mind.
"},{"location":"test-suite/usage/#reframe-output-and-log-files","title":"ReFrame output and log files","text":"ReFrame will generate various output and log files:
- a general ReFrame log file with debug logging on the ReFrame run (incl. selection of tests, generating checks, test results, etc.);
- stage directories for each generated check, in which the checks are run;
- output directories for each generated check, which include the test output;
- performance log files for each test, which include performance results for the test runs;
We strongly recommend controlling where these files go by using the common logging configuration that is provided by the EESSI test suite in your ReFrame configuration file and setting $RFM_PREFIX
(avoid using the cmd line option --prefix
).
If you do, and if you use ReFrame v4.3.3 or more newer, you should find the output and log files at:
- general ReFrame log file at
$RFM_PREFIX/logs/reframe_<datestamp>_<timestamp>.log
; - stage directories in
$RFM_PREFIX/stage/<system>/<partition>/<environment>/
; - output directories in
$RFM_PREFIX/output/<system>/<partition>/<environment>/
; - performance log files in
$RFM_PREFIX/perflogs/<system>/<partition>/<environment>/
;
In the stage and output directories, there will be a subdirectory for each check that was run, which are tagged with a unique hash (like d3adb33f
) that is determined based on the specific parameters for that check (see the ReFrame documentation for more details on the test naming scheme).
"},{"location":"test-suite/usage/#filtering-tests","title":"Filtering tests","text":"By default, ReFrame will automatically generate checks for each system partition, based on the tests available in the EESSI test suite, available software modules, and tags defined in the EESSI test suite.
To avoid being overwhelmed by checks, it is recommended to apply filters so ReFrame only generates the checks you are interested in.
"},{"location":"test-suite/usage/#filter-name","title":"Filtering by test name","text":"You can filter checks based on the full test name using the --name
option (or -n
), which includes the values of all test parameters.
Here's an example of a full test name:
GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f @example:gpu+default\n
To let ReFrame only generate checks for GROMACS, you can use:
reframe --name GROMACS\n
To only run GROMACS checks with a particular version of GROMACS, you can use --name
to only retain specific GROMACS
modules:
reframe --name %module_name=GROMACS/2023.1\n
Likewise, you can filter on any part of the test name.
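For instance, to only retain checks for a particular benchmark input, you could filter on the benchmark_info parameter shown in the example test name above:
reframe --name '%benchmark_info=HECBioSim/Crambin'\n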
You can also select one specific check using the corresponding test hash, which is also part of the full test name (see /d3adb33f
in the example above). For example:
reframe --name /d3adb33f\n
The argument passed to --name
is interpreted as a Python regular expression, so you can use wildcards like .*
, character ranges like [0-9]
, use ^
to specify that the pattern should match from the start of the test name, etc.
Use --list
or --dry-run
to check the impact of using the --name
option.
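For instance, a sketch that combines a regular expression with a dry run (the pattern is only an example):
reframe --dry-run --name '^GROMACS.*%scale=1_node'\n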
"},{"location":"test-suite/usage/#filter-system-partition","title":"Filtering by system (partition)","text":"By default, ReFrame will generate checks for each system partition that is listed in your configuration file.
To let ReFrame only generate checks for a particular system or system partition, you can use the --system
option.
For example:
- To let ReFrame only generate checks for the system named
example
, use: reframe --system example ...\n
- To let ReFrame only generate checks for the
gpu
partition of the system named example
, use: reframe --system example:gpu ...\n
Use --dry-run
to check the impact of using the --system
option.
"},{"location":"test-suite/usage/#filter-tag","title":"Filtering by tags","text":"To filter tests using one or more tags, you can use the --tag
option.
Using --list-tags
you can get a list of known tags.
To check the impact of this on generated checks by ReFrame, use --list
or --dry-run
.
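For example, a quick sketch to first list the known tags and then preview the checks a given tag would generate (replace 1_node with any tag from that list):
reframe --list-tags\nreframe --dry-run --tag 1_node\n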
"},{"location":"test-suite/usage/#ci-tag","title":"CI
tag","text":"For each software that is included in the EESSI test suite, a small test is tagged with CI
to indicate it can be used in a Continuous Integration (CI) environment.
Hence, you can use this tag to let ReFrame only generate checks for small test cases:
reframe --tag CI\n
For example:
$ reframe --name GROMACS --tag CI\n...\n
"},{"location":"test-suite/usage/#scale-tags","title":"scale
tags","text":"The EESSI test suite defines a set of custom tags that control the scale of checks, which specify many cores/GPUs/nodes should be used for running a check. The number of cores and GPUs serves as an upper limit; the actual count depends on the specific configuration of cores, GPUs, and sockets within the node, as well as the specific test being carried out.
tag name description 1_core
using 1 CPU core and 1 GPU 2_cores
using 2 CPU cores and 1 GPU 4_cores
using 4 CPU cores and 1 GPU 1_cpn_2_nodes
using 1 CPU core per node, 1 GPU per node, and 2 nodes 1_cpn_4_nodes
using 1 CPU core per node, 1 GPU per node, and 4 nodes 1_8_node
using 1/8th of a node (12.5% of available cores/GPUs, 1 at minimum) 1_4_node
using a quarter of a node (25% of available cores/GPUs, 1 at minimum) 1_2_node
using half of a node (50% of available cores/GPUs, 1 at minimum) 1_node
using a full node (all available cores/GPUs) 2_nodes
using 2 full nodes 4_nodes
using 4 full nodes 8_nodes
using 8 full nodes 16_nodes
using 16 full nodes"},{"location":"test-suite/usage/#using-multiple-tags","title":"Using multiple tags","text":"To filter tests using multiple tags, you can:
- use
|
as separator to indicate that one of the specified tags must match (logical OR, for example --tag='1_core|2_cores'
); - use the
--tag
option multiple times to indicate that all specified tags must match (logical AND, for example --tag CI --tag 1_core
);
"},{"location":"test-suite/usage/#example-commands","title":"Example commands","text":"Running all GROMACS tests on 4 cores on the cpu
partition
reframe --run --system example:cpu --name GROMACS --tag 4_cores --performance-report\n
List all checks for TensorFlow 2.11 using a single node
reframe --list --name %module_name=TensorFlow/2.11 --tag 1_node\n
Dry run of TensorFlow CI checks on a quarter (1/4) of a node (on all system partitions)
reframe --dry-run --name 'TensorFlow.*CUDA' --tag 1_4_node --tag CI\n
"},{"location":"test-suite/usage/#overriding-test-parameters-advanced","title":"Overriding test parameters (advanced)","text":"You can override test parameters using the --setvar
option (or -S
).
This can be done either globally (for all tests), or only for specific tests (which is recommended when using --setvar
).
For example, to run all GROMACS checks with a specific GROMACS module, you can use:
reframe --setvar GROMACS_EESSI.modules=GROMACS/2023.1-foss-2022a ...\n
Warning
We do not recommend using --setvar
, since it is quite easy to make unintended changes to test parameters this way that can result in broken checks.
You should try filtering tests using the --name
or --tag
options instead.
"},{"location":"using_eessi/basic_commands/","title":"Basic commands","text":""},{"location":"using_eessi/basic_commands/#basic-commands-to-access-software-provided-via-eessi","title":"Basic commands to access software provided via EESSI","text":"EESSI provides software through environment module files and Lmod.
To see which modules (and extensions) are available, run:
module avail\n
Below is a short excerpt of the output produced by module avail
, showing 10 modules only.
PyYAML/5.3-GCCcore-9.3.0\n Qt5/5.14.1-GCCcore-9.3.0\n Qt5/5.15.2-GCCcore-10.3.0 (D)\n QuantumESPRESSO/6.6-foss-2020a\n R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n R/4.0.0-foss-2020a\n R/4.1.0-foss-2021a (D)\n re2c/1.3-GCCcore-9.3.0\n re2c/2.1.1-GCCcore-10.3.0 (D)\n RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n
Load modules with module load package/version
, e.g., module load R/4.1.0-foss-2021a
, and try out the software. See below for a short session:
[EESSI 2023.06] $ module load R/4.1.0-foss-2021a\n[EESSI 2021.06] $ which R\n/cvmfs/software.eessi.io/versions/2021.12/software/linux/x86_64/intel/skylake_avx512/software/R/4.1.0-foss-2021a/bin/R\n[EESSI 2023.06] $ R --version\nR version 4.1.0 (2021-05-18) -- \"Camp Pontanezen\"\nCopyright (C) 2021 The R Foundation for Statistical Computing\nPlatform: x86_64-pc-linux-gnu (64-bit)\n\nR is free software and comes with ABSOLUTELY NO WARRANTY.\nYou are welcome to redistribute it under the terms of the\nGNU General Public License versions 2 or 3.\nFor more information about these matters see\nhttps://www.gnu.org/licenses/.\n
"},{"location":"using_eessi/building_on_eessi/","title":"Building software on top of EESSI","text":""},{"location":"using_eessi/building_on_eessi/#building-software-on-top-of-eessi-with-easybuild","title":"Building software on top of EESSI with EasyBuild","text":"Building on top of EESSI with EasyBuild is relatively straightforward. One crucial feature is that EasyBuild supports building against operating system libraries that are not in a standard prefix (such as /usr/lib
). This is required when building against EESSI, since all of the software in EESSI is built against the compatibility layer.
"},{"location":"using_eessi/building_on_eessi/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"Start your environment as described here
"},{"location":"using_eessi/building_on_eessi/#configure-easybuild","title":"Configure EasyBuild","text":"To configure EasyBuild, first, check out the EESSI software-layer repository. We advise you to check out the branch corresponding to the version of EESSI you would like to use.
If you are unsure which version you are using, you can run
echo ${EESSI_VERSION}\n
to check it. To build on top of e.g. version 2023.06
of the EESSI software stack, we check it out, and go into that directory:
git clone https://github.com/EESSI/software-layer/ --branch 2023.06\ncd software-layer\n
Then, you have to pick a working directory (that you have write access to) where EasyBuild can do the build, and an install directory (with sufficient storage space), where EasyBuild can install it. In this example, we create a temporary directory in /tmp/
as our working directory, and use $HOME/.local/easybuild
as our installpath: export WORKDIR=$(mktemp --directory --tmpdir=/tmp -t eessi-build.XXXXXXXXXX)\nsource configure_easybuild\nexport EASYBUILD_INSTALLPATH=\"${HOME}/.local/easybuild\"\n
Next, you load the EasyBuild module that you want to use, e.g. module load EasyBuild/4.8.2\n
Finally, you can check the current configuration for EasyBuild using eb --show-config\n
Note
We use EasyBuild's default behaviour of optimizing for the host architecture. Since the EESSI initialization script also loads the EESSI stack that is optimized for your host architecture, this matches nicely. However, if you work on a cluster with heterogeneous node types, be aware that you can only use these builds on nodes with the same architecture as the one you built them on. You can use different EASYBUILD_INSTALLPATH
s if you want to build for different host architectures. For example, when you are on a system that has a mix of AMD zen3
and AMD zen4
nodes, you might want to use EASYBUILD_INSTALLPATH=$HOME/.local/easybuild/zen3
when building on a zen3
node, EASYBUILD_INSTALLPATH=$HOME/.local/easybuild/zen4
when building on a zen4
node. Then, in the step below, instead of the module use
command listed there, you can use module use $HOME/.local/easybuild/zen3/modules/all
when you want to run on a zen3
node and module use $HOME/.local/easybuild/zen4/modules/all
when you want to run on a zen4
node.
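As a minimal sketch of this idea, assuming the EESSI init script has exported the detected CPU architecture in ${EESSI_SOFTWARE_SUBDIR} (as reported when sourcing the init script), you could derive an architecture-specific installpath automatically:
# sketch: architecture-specific installpath (adjust the base directory to your own layout)\nexport EASYBUILD_INSTALLPATH=$HOME/.local/easybuild/${EESSI_SOFTWARE_SUBDIR}\n# later, on a node of the same architecture:\nmodule use $EASYBUILD_INSTALLPATH/modules/all\n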
"},{"location":"using_eessi/building_on_eessi/#building","title":"Building","text":"Now, you are ready to build. For example, at the time of writing, netCDF-4.9.0-gompi-2022a.eb
was not in the EESSI environment yet, so you can build it yourself:
eb netCDF-4.9.0-gompi-2022a.eb\n
Note
If this netCDF module is available by the time you try this, you can force a local rebuild by adding the --rebuild
argument in order to experiment with building locally, or pick a different EasyConfig to build.
"},{"location":"using_eessi/building_on_eessi/#using-the-newly-built-module","title":"Using the newly built module","text":"First, you'll need to add the subdirectory of the EASYBUILD_INSTALLPATH
that contains the modules to the MODULEPATH
. You can do that using:
module use ${EASYBUILD_INSTALLPATH}/modules/all\n
You may want to do this as part of your .bashrc
.
Note
Be careful adding to the MODULEPATH
in your .bashrc
if you are on a cluster with heterogeneous architectures. You don't want to accidentally pick up a module that was not compiled for the correct architecture.
Since your module is built on top of the EESSI environment, that needs to be loaded first (as described here), if you haven't already done so.
Finally, you should be able to load the newly built module:
module load netCDF/4.9.0-gompi-2022a\n
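As a quick sanity check, you could verify that the new module is both visible and loaded, for example with:
module avail netCDF\nmodule list\n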
"},{"location":"using_eessi/building_on_eessi/#manually-building-software-op-top-of-eessi","title":"Manually building software op top of EESSI","text":"Building software on top of EESSI would require your linker to use the same system-dependencies as the software in EESSI does. In other words: it requires you to link against libraries from the compatibility layer, instead of from your host OS.
While we plan to support this in the future, manually building on top of EESSI is not yet supported in a trivial way.
"},{"location":"using_eessi/eessi_demos/","title":"Running EESSI demos","text":"To really experience how using EESSI can significantly facilitate the work of researchers, we recommend running one or more of the EESSI demos.
First, clone the eessi-demo
Git repository, and move into the resulting directory:
git clone https://github.com/EESSI/eessi-demo.git\ncd eessi-demo\n
The contents of the directory should be something like this:
$ ls -l\ntotal 48\ndrwxrwxr-x 2 example users 4096 May 15 13:26 Bioconductor\ndrwxrwxr-x 2 example users 4096 May 15 13:26 ESPResSo\ndrwxrwxr-x 2 example users 4096 May 15 13:26 GROMACS\n-rw-rw-r-- 1 example users 18092 Dec 5 2022 LICENSE\ndrwxrwxr-x 2 example users 4096 May 15 13:26 OpenFOAM\n-rw-rw-r-- 1 example users 543 May 15 13:26 README.md\ndrwxrwxr-x 3 example users 4096 May 15 13:26 scripts\ndrwxrwxr-x 2 example users 4096 May 15 13:26 TensorFlow\n
The directories we care about are those that correspond to particular scientific software, like Bioconductor
, GROMACS
, OpenFOAM
, TensorFlow
, ...
Each of these contains a run.sh
script that can be used to start a small example run with that software. Every example takes only a couple of minutes to run, even with limited resources.
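For example, a sketch of running the GROMACS demo (assuming the directory layout shown above and an environment that has already been set up to use EESSI):
cd GROMACS\n./run.sh\n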
"},{"location":"using_eessi/eessi_demos/#example-running-tensorflow","title":"Example: running TensorFlow","text":"Let's try running the TensorFlow example.
First, we need to make sure that our environment is set up to use EESSI:
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n
Change to the TensorFlow
subdirectory of the eessi-demo
Git repository, and execute the run.sh
script:
[EESSI 2023.06] $ cd TensorFlow\n[EESSI 2023.06] $ ./run.sh\n
Shortly after starting the script you should see output as shown below, which indicates that TensorFlow has started running:
Epoch 1/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.2983 - accuracy: 0.9140\nEpoch 2/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.1444 - accuracy: 0.9563\nEpoch 3/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.1078 - accuracy: 0.9670\nEpoch 4/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.0890 - accuracy: 0.9717\nEpoch 5/5\n 1875/1875 [==============================] - 3s 1ms/step - loss: 0.0732 - accuracy: 0.9772\n313/313 - 0s - loss: 0.0679 - accuracy: 0.9790 - 391ms/epoch - 1ms/step\n\nreal 1m24.645s\nuser 0m16.467s\nsys 0m0.910s\n
"},{"location":"using_eessi/setting_up_environment/","title":"Setting up your environment","text":"To set up the EESSI environment, simply run the command:
source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n
This may take a while as data is downloaded from a Stratum 1 server, which is part of the CernVM-FS infrastructure used to distribute files. You should see the following output:
Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\narchdetect says x86_64/amd/zen2\nUsing x86_64/amd/zen2 as software subdirectory.\nUsing /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} [user@system ~]$ # (2)!\n
- What is reported here depends on the CPU architecture of the machine on which you are running the
source
command. - This is the prompt indicating that you have access to the EESSI software stack.
The last line is the shell prompt.
Your environment is now set up; you are ready to start running software provided by EESSI!
"},{"location":"blog/archive/2024/","title":"2024","text":""}]}
\ No newline at end of file