From 1c734d938fefc0c3189943fd6578ca7dfacc9420 Mon Sep 17 00:00:00 2001 From: <> Date: Thu, 12 Dec 2024 15:31:08 +0000 Subject: [PATCH] Deployed f300187b1 with MkDocs version: 1.6.1 --- search/search_index.json | 2 +- using_eessi/building_on_eessi/index.html | 6 +++++- 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/search/search_index.json b/search/search_index.json index e1ab46e69..76e556a97 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to the EESSI project documentation!","text":"

Quote

What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC cluster or cloud instance you use or maintain, without compromising on performance?

The European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is a collaboration between different European partners in the HPC community. The goal of this project is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure.

"},{"location":"#quick-links","title":"Quick links","text":"

For users:

For system administrators:

For contributors:

The EESSI project was covered during a quick AWS HPC Tech Short video (15 June 2023):

"},{"location":"bot/","title":"Build-test-deploy bot","text":"

Building, testing, and deploying software is done by one or more bot instances.

The EESSI build-test-deploy bot is implemented as a GitHub App in the eessi-bot-software-layer repository.

It operates in the context of pull requests to the compatibility-layer repository or the software-layer repository, and follows the instructions supplied by humans, so the procedure of adding software to EESSI is semi-automatic.

It leverages the scripts provided in the bot/ subdirectory of the target repository (see for example here), like bot/build.sh to build software, and bot/check-result.sh to check whether the software was built correctly.

"},{"location":"bot/#high-level-design","title":"High-level design","text":"

The bot consists of two components: the event handler, and the job manager.

"},{"location":"bot/#event-handler","title":"Event handler","text":"

The bot event handler is responsible for handling GitHub events for the GitHub repositories it is registered to.

It is triggered for every event that it receives from GitHub. Most events are ignored, but specific events trigger the bot to take action.

Examples of actionable events are the submission of a comment that starts with bot:, which may specify an instruction for the bot like building software, or the addition of a bot:deploy label (see deploying).

"},{"location":"bot/#job-manager","title":"Job manager","text":"

The bot job manager is responsible for monitoring the queued and running jobs, and reporting back when jobs have completed.

It runs every couple of minutes as a cron job.

"},{"location":"bot/#basics","title":"Basics","text":"

Instructions for the bot should always start with bot:.

To get help from the bot, post a comment with bot: help.

To make the bot report how it is configured, post a comment with bot: show_config.
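For example, both commands as they would be posted in a pull request comment:

```
bot: help
bot: show_config
```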

"},{"location":"bot/#permissions","title":"Permissions","text":"

The bot is configured to only act on instructions issued by specific GitHub accounts.

There are separate configuration options for who is allowed to send instructions to the bot, to trigger building of software, and to deploy software installations into the EESSI repository.

Note

Ask for help in the #software-layer-bot channel of the EESSI Slack if needed!

"},{"location":"bot/#building","title":"Building","text":"

To instruct the bot to build software, one or more build instructions should be issued by posting a comment in the pull request (see also here).

The most basic build instruction that can be sent to the bot is:

bot: build\n

Warning

Only use bot: build if you are confident that it is OK to do so.

Most likely, you want to supply one or more filters to prevent the bot from building for all of its configurations.

"},{"location":"bot/#filters","title":"Filters","text":"

Build instructions can include filters that are applied by each bot instance to determine which builds should be executed, based on:

Note

Use : as the separator to specify a value for a particular filter, and do not add spaces after the :.

The bot recognizes shorthands for the supported filters, so you can use inst:... instead of instance:..., repo:... instead of repository:..., and arch:... instead of architecture:....
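For example, the following two build instructions are equivalent (using the repository and CPU target that appear in the examples on this page):

```
bot: build instance:aws repository:eessi-hpc.org-2023.06-software architecture:x86_64/amd/zen2
bot: build inst:aws repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen2
```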

"},{"location":"bot/#combining-filters","title":"Combining filters","text":"

You can combine multiple filters in a single build instruction. Separate filters with a space; the order of the filters does not matter.

For example:

bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen2\n
"},{"location":"bot/#multiple-build-instructions","title":"Multiple build instructions","text":"

You can issue multiple build instructions in a single comment, even across multiple bot instances, repositories, and CPU targets. Specify one build instruction per line.

For example:

bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen3 inst:aws\nbot: build repo:eessi-hpc.org-2023.06-software arch:aarch64/generic inst:azure\n

Note

The bot applies the filters with partial matching, which you can use to combine multiple build instructions into a single one.

For example, if you want to build for all aarch64 CPU targets, you can use arch:aarch64 as a filter.

The same applies to the instance and repository filters.
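The partial-matching behaviour described above can be sketched in Python. This is a hypothetical illustration of the idea, not the bot's actual implementation; the filter names and configuration values are taken from the examples on this page.

```python
# Hypothetical sketch of how a bot instance could apply build-instruction
# filters with partial matching; NOT the actual bot code.

def matches(filters, config):
    """Return True if every filter value is a partial (prefix) match
    of the corresponding value in this bot instance's configuration."""
    return all(config.get(key, '').startswith(value)
               for key, value in filters.items())

# A bot instance configured for an aarch64 CPU target on AWS ...
config = {
    'arch': 'aarch64/generic',
    'inst': 'aws',
    'repo': 'eessi-hpc.org-2023.06-software',
}

# ... matches a coarse filter like arch:aarch64 ...
assert matches({'arch': 'aarch64'}, config)

# ... but not a filter for a different CPU family.
assert not matches({'arch': 'x86_64/amd/zen2'}, config)
```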

"},{"location":"bot/#behind-the-scenes","title":"Behind-the-scenes","text":""},{"location":"bot/#processing-build-instructions","title":"Processing build instructions","text":"

When the bot receives build instructions through a comment in a pull request, they are processed by the event handler component. It will:

1) Combine its active configuration (instance name, repositories, supported CPU targets) and the build instructions to prepare a list of jobs to submit;

2) Create a working directory for each job, including a Slurm job script that runs the bot/build.sh script in the context of the changes proposed in the pull request to build the software, and runs the bot/check-result.sh script at the end to check whether the build was successful;

3) Submit each prepared job to a worker node that can build for the specified CPU target, and put a hold on it.

"},{"location":"bot/#managing-build-jobs","title":"Managing build jobs","text":"

During the next iteration of the job manager, the submitted jobs are released and queued for execution.

The job manager also monitors the running jobs at regular intervals, and reports back in the pull request when a job has completed. It also reports the result (SUCCESS or FAILURE), based on the result of the bot/check-result.sh script.

"},{"location":"bot/#artefacts","title":"Artefacts","text":"

If all goes well, each job should produce a tarball as an artefact, which contains the software installations and the corresponding environment module files.

The message reported by the job manager provides an overview of the contents of the artefact, which was created by the bot/check-result.sh script.

"},{"location":"bot/#testing","title":"Testing","text":"

Warning

The test phase is not implemented yet in the bot.

We intend to use the EESSI test suite in different OS configurations to verify that the software that was built works as expected.

"},{"location":"bot/#deploying","title":"Deploying","text":"

To deploy the artefacts that were obtained in the build phase, you should add the bot: deploy label to the pull request.

This will trigger the event handler to upload the artefacts for ingestion into the EESSI repository.

"},{"location":"bot/#behind-the-scenes_1","title":"Behind-the-scenes","text":"

The current setup for the software-layer repository is as follows:

"},{"location":"compatibility_layer/","title":"Compatibility layer","text":"

The middle layer of the EESSI project is the compatibility layer, which ensures that our scientific software stack is compatible with different client operating systems (different Linux distributions, macOS and even Windows via WSL).

For this we rely on Gentoo Prefix, by installing a limited set of Gentoo Linux packages in a non-standard location (a \"prefix\"), using Gentoo's package manager Portage.

The compatibility layer is maintained via our https://github.com/EESSI/compatibility-layer GitHub repository.

"},{"location":"contact/","title":"Contact info","text":"

For more information:

"},{"location":"filesystem_layer/","title":"Filesystem layer","text":""},{"location":"filesystem_layer/#cernvm-file-system-cernvm-fs","title":"CernVM File System (CernVM-FS)","text":"

The bottom layer of the EESSI project is the filesystem layer, which is responsible for distributing the software stack.

For this we rely on CernVM-FS (or CVMFS for short), a network file system used to distribute the software to the clients in a fast, reliable and scalable way.

CVMFS was created over 10 years ago specifically for the purpose of globally distributing a large software stack. For the experiments at the Large Hadron Collider, it hosts several hundred million files and directories that are distributed to on the order of a hundred thousand client computers.

The hierarchical structure with multiple caching layers (Stratum-0, Stratum-1's located at partner sites and local caching proxies) ensures good performance with limited resources. Redundancy is provided by using multiple Stratum-1's at various sites. Since CVMFS is based on the HTTP protocol, the ubiquitous Squid caching proxy can be leveraged to reduce server loads and improve performance at large installations (such as HPC clusters). Clients can easily mount the file system (read-only) via a FUSE (Filesystem in Userspace) module.

For a (basic) introduction to CernVM-FS, see this presentation.

Detailed information about how we configure CVMFS is available at https://github.com/EESSI/filesystem-layer.

"},{"location":"filesystem_layer/#eessi-infrastructure","title":"EESSI infrastructure","text":"

For both the pilot and production repositories, EESSI hosts a CernVM-FS Stratum 0 and a number of public Stratum 1 servers. Client systems using EESSI by default connect to the public EESSI CernVM-FS Stratum 1 servers. The status of the infrastructure for the pilot repository is displayed at http://status.eessi-infra.org, while for the production repository it is displayed at https://status.eessi.io.

"},{"location":"governance/","title":"EESSI Governance","text":"

EESSI recognises that formal governance is essential given the ambitions of the project, not just for EESSI itself but also for those who would adopt EESSI and/or fund its development.

EESSI is, therefore, in the process of adopting a formal governance model. To facilitate this process it has created an Interim Steering Committee whose role is to progress this adoption while also providing direction to the project.

"},{"location":"governance/#members-of-the-interim-steering-committee","title":"Members of the Interim Steering Committee","text":"

The members of the Interim Steering Committee are listed below. Each member of the Interim Steering Committee also nominates an alternate in case they are not able to attend a meeting of the committee.

"},{"location":"meetings/","title":"Meetings","text":""},{"location":"meetings/#monthly-meetings-online","title":"Monthly meetings (online)","text":"

Online EESSI update meeting, every 1st Thursday of the month at 14:00 CE(S)T.

More info can be found on the EESSI wiki.

"},{"location":"meetings/#physical-meetings","title":"Physical meetings","text":""},{"location":"meetings/#physical-meetings-archive","title":"Physical meetings (archive)","text":""},{"location":"meetings/#2020","title":"2020","text":""},{"location":"meetings/#2019","title":"2019","text":""},{"location":"overview/","title":"Overview of the EESSI project","text":""},{"location":"overview/#scope-goals","title":"Scope & Goals","text":"

Through the EESSI project, we want to set up a shared stack of scientific software installations, and by doing so avoid a lot of duplicate work across HPC sites.

For end users, we want to provide a uniform user experience with respect to available scientific software, regardless of which system they use.

Our software stack should work on laptops, personal workstations, HPC clusters and in the cloud, which means we will need to support different CPUs, networks, GPUs, and so on. We hope to make this work for any Linux distribution and maybe even macOS and Windows via WSL, and a wide variety of CPU architectures (Intel, AMD, ARM, POWER, RISC-V).

Of course we want to focus on the performance of the software, but also on automating the workflow for maintaining the software stack, thoroughly testing the installations, and collaborating efficiently.

"},{"location":"overview/#inspiration","title":"Inspiration","text":"

The EESSI concept is heavily inspired by the Compute Canada software stack, which is a shared software stack used on all 5 major national systems in Canada and a number of smaller ones.

The design of the Compute Canada software stack is discussed in detail in the PEARC'19 paper \"Providing a Unified Software Environment for Canada\u2019s National Advanced Computing Centers\".

It has also been presented at the 5th EasyBuild User Meeting (slides, recorded talk), and is well documented.

"},{"location":"overview/#layered-structure","title":"Layered structure","text":"

The EESSI project consists of 3 layers.

The bottom layer is the filesystem layer, which is responsible for distributing the software stack across clients.

The middle layer is a compatibility layer, which ensures that the software stack is compatible with multiple different client operating systems.

The top layer is the software layer, which contains the actual scientific software applications and their dependencies.

The host OS still provides a couple of things, like drivers for network and GPU, support for shared filesystems like GPFS and Lustre, a resource manager like Slurm, and so on.

"},{"location":"overview/#opportunities","title":"Opportunities","text":"

We hope to collaborate with interested parties across the HPC community, including HPC centres, vendors, consultancy companies and scientific software developers.

Through our software stack, HPC users can seamlessly hop between sites, since the same software is available everywhere.

We can leverage each other's work with respect to providing tested and properly optimized scientific software installations more efficiently, and provide a platform for easy benchmarking of new systems.

By working together with the developers of scientific software we can provide vetted installations for the broad HPC community.

"},{"location":"overview/#challenges","title":"Challenges","text":"

There are many challenges in an ambitious project like this, including (but probably not limited to):

"},{"location":"overview/#current-status","title":"Current status","text":"

(June 2020)

We are actively working on the EESSI repository, and are organizing monthly meetings to discuss progress and next steps forward.

Keep an eye on our GitHub repositories at https://github.com/EESSI and our Twitter feed.

"},{"location":"partners/","title":"Project partners","text":""},{"location":"partners/#delft-university-of-technology-the-netherlands","title":"Delft University of Technology (The Netherlands)","text":""},{"location":"partners/#dell-technologies-europe","title":"Dell Technologies (Europe)","text":""},{"location":"partners/#eindhoven-university-of-technology","title":"Eindhoven University of Technology","text":""},{"location":"partners/#ghent-university-belgium","title":"Ghent University (Belgium)","text":""},{"location":"partners/#hpcnow-spain","title":"HPCNow! (Spain)","text":""},{"location":"partners/#julich-supercomputing-centre-germany","title":"J\u00fclich Supercomputing Centre (Germany)","text":""},{"location":"partners/#university-of-cambridge-united-kingdom","title":"University of Cambridge (United Kingdom)","text":""},{"location":"partners/#university-of-groningen-the-netherlands","title":"University of Groningen (The Netherlands)","text":""},{"location":"partners/#university-of-twente-the-netherlands","title":"University of Twente (The Netherlands)","text":""},{"location":"partners/#university-of-oslo-norway","title":"University of Oslo (Norway)","text":""},{"location":"partners/#university-of-bergen-norway","title":"University of Bergen (Norway)","text":""},{"location":"partners/#vrije-universiteit-amsterdam-the-netherlands","title":"Vrije Universiteit Amsterdam (The Netherlands)","text":""},{"location":"partners/#surf-the-netherlands","title":"SURF (The Netherlands)","text":""},{"location":"software_layer/","title":"Software layer","text":"

The top layer of the EESSI project is the software layer, which provides the actual scientific software installations.

To install the software we include in our stack, we use EasyBuild, a framework for installing scientific software on HPC systems. These installations are optimized for a particular system architecture (specific CPU and GPU generation).

To access these software installations, we provide environment module files and use Lmod, a modern environment modules tool that has been widely adopted in the HPC community in recent years.

We leverage the archspec Python library to automatically select the best suited part of the software stack for a particular host, based on its system architecture.

The software layer is maintained through our https://github.com/EESSI/software-layer GitHub repository.

"},{"location":"software_testing/","title":"Software testing","text":"

This page has been replaced by test-suite; update your bookmarks!

"},{"location":"support/","title":"Getting support for EESSI","text":"

Thanks to the MultiXscale EuroHPC project we are able to provide support to the users of EESSI.

The EESSI support portal is hosted in GitLab: https://gitlab.com/eessi/support.

"},{"location":"support/#open-issue","title":"How to report a problem or ask a question","text":"

We recommend using a GitLab account if you want to get help from the EESSI support team.

If you have a GitLab account you can submit your problems or questions on EESSI via the issue tracker of the EESSI support portal at https://gitlab.com/eessi/support/-/issues. Please use one of the provided templates (report a problem, software request, question, ...) when creating an issue.

You can also contact us via our e-mail address support (@) eessi.io, which will automatically create a (private) issue in the EESSI support portal. When you send us an email, please provide us with as much information as possible on your question or problem. You can find an overview of the information that we would like to receive in the README of the EESSI support portal.

"},{"location":"support/#level-of-support","title":"Level of Support","text":"

We provide support for EESSI according to a \"reasonable effort\" standard. That means we will make a reasonable effort to help you, but we may not have the time to explore every potential cause, and it may not lead to a (quick) solution. You can compare this to the level of support you typically get from other active open source projects.

Note that the more complete your reported issue is (e.g. description of the error, what you ran, the software environment in which you ran, minimal reproducer, etc.) the bigger the chance is that we can help you with \"reasonable effort\".

"},{"location":"support/#what-do-we-provide-support-for","title":"What do we provide support for","text":""},{"location":"support/#accessing-and-using-the-eessi-software-stack","title":"Accessing and using the EESSI software stack","text":"

If you have trouble connecting to the software stack, such as trouble related to installing or configuring CernVM-FS to access the EESSI filesystem layer, or running the software installations included in the EESSI compatibility layer or software layer, please contact us.

Note that we can only help with problems related to the software installations (getting the software to run, to perform as expected, etc.). We do not provide support for using specific features of the provided software, nor can we fix (known or unknown) bugs in the software included in EESSI. We can only help with diagnosing and fixing problems that are caused by how the software was built and installed in EESSI.

"},{"location":"support/#software-requests","title":"Software requests","text":"

We are open to software requests for software that is not included in EESSI yet.

The quickest way to add additional software to EESSI is by contributing it yourself as a community contribution; please see the documentation on adding software.

Alternatively, you can send in a request to our support team. Please try to provide as much information on the software as possible: preferably use the issue template (which requires you to log in to GitLab), or make sure to cover the items listed here.

Be aware that we can only provide software that has an appropriate open source license.

"},{"location":"support/#eessi-test-suite","title":"EESSI test suite","text":"

If you are using the EESSI test suite, you can get help via the EESSI support portal.

"},{"location":"support/#build-and-deploy-bot","title":"Build-and-deploy bot","text":"

If you are using the EESSI build-and-deploy bot, you can get help via the EESSI support portal.

"},{"location":"support/#what-do-we-not-provide-support-for","title":"What do we not provide support for","text":"

Do not contact the EESSI support team to get help with using software that is included in EESSI, unless you think the problems you are seeing are related to how the software was built and installed.

Please consult the documentation of the software you are using, or contact the developers of the software directly, if you have questions regarding using the software, or if you think you have found a bug.

Funded by the European Union. This work has received funding from the European High Performance Computing Joint Undertaking (JU) and countries participating in the project under grant agreement No 101093169.

"},{"location":"systems/","title":"Systems on which EESSI is available natively","text":"

This page lists the HPC systems (that we know of) on which EESSI is available system-wide.

On these systems, you should be able to initialise your session environment for using EESSI as documented here, and you can try running our demos.

Please report additional systems on which EESSI is available

If you know of one or more systems on which EESSI is available system-wide that are not listed here yet, please let us know by contacting the EESSI support team, so we can update this page (or open a pull request).

What if EESSI is not available system-wide yet?

If EESSI is not available yet on the HPC system(s) that you use, contact the corresponding support team and submit a request to make it available.

You can point them to our documentation:

If they have any questions, please suggest that they contact the EESSI support team.

In the meantime, you can try using one of the alternative ways of accessing EESSI, like using a container.

"},{"location":"systems/#eurohpc-ju-systems","title":"EuroHPC JU systems","text":"

EESSI is available on several of the EuroHPC JU supercomputers.

"},{"location":"systems/#karolina-czech-republic","title":"Karolina (Czech Republic)","text":"

Karolina is the EuroHPC JU supercomputer hosted by IT4Innovations.

"},{"location":"systems/#vega-slovenia","title":"Vega (Slovenia)","text":"

Vega is the EuroHPC JU supercomputer hosted by the Institute for Information Science (IZUM).

"},{"location":"systems/#other-european-systems","title":"Other European systems","text":""},{"location":"systems/#belgium","title":"Belgium","text":""},{"location":"systems/#ghent-university","title":"Ghent University","text":""},{"location":"systems/#vrije-universiteit-brussel","title":"Vrije Universiteit Brussel","text":""},{"location":"systems/#germany","title":"Germany","text":""},{"location":"systems/#embl-heidelberg","title":"EMBL Heidelberg","text":""},{"location":"systems/#university-of-stuttgart","title":"University of Stuttgart","text":""},{"location":"systems/#greece","title":"Greece","text":""},{"location":"systems/#aristotle-university-of-thessaloniki","title":"Aristotle University of Thessaloniki","text":""},{"location":"systems/#netherlands","title":"Netherlands","text":""},{"location":"systems/#surf","title":"SURF","text":""},{"location":"systems/#university-of-groningen","title":"University of Groningen","text":""},{"location":"systems/#norway","title":"Norway","text":""},{"location":"systems/#sigma2-as-norwegian-research-infrastructure-services","title":"Sigma2 AS / Norwegian Research Infrastructure Services","text":""},{"location":"talks/","title":"Talks related to EESSI","text":""},{"location":"talks/#2023","title":"2023","text":""},{"location":"adding_software/adding_development_software/","title":"Adding software to dev.eessi.io","text":"

dev.eessi.io is still in active development and focused on MultiXscale

The dev.eessi.io repository and its functionality are still in their early stages. The repository itself and the build + deploy procedure for it are functional, but may change often for the time being.

Our focus is currently on including and supporting developers and applications in the MultiXscale CoE.

"},{"location":"adding_software/adding_development_software/#what-is-deveessiio","title":"What is dev.eessi.io?","text":"

dev.eessi.io is the development repository of EESSI.

"},{"location":"adding_software/adding_development_software/#adding-software","title":"Adding software","text":"

Using dev.eessi.io is similar to using EESSI's production repository software.eessi.io. Software builds are triggered by a bot listening to pull requests in GitHub repositories. These builds require custom easyconfig and easystack files, which should be in specific directories.

To see this in practice, refer to the dev.eessi.io-example GitHub repository. In this GitHub repository you will find templates for some software installations with the appropriate directory structure, that is:

dev.eessi.io-example\n\u251c\u2500\u2500 easyconfigs\n\u2514\u2500\u2500 easystacks\n
"},{"location":"adding_software/adding_development_software/#quick-steps-to-build-for-deveessiio","title":"Quick steps to build for dev.eessi.io","text":""},{"location":"adding_software/adding_development_software/#installation-details","title":"Installation details","text":""},{"location":"adding_software/adding_development_software/#easyconfig-files-and-software-commit","title":"easyconfig files and --software-commit","text":"

The approach to build and install software is similar to that of software.eessi.io. It requires one or more easyconfig files. Easyconfig files used for building for dev.eessi.io do not need to be a part of an EasyBuild release, unlike builds for software.eessi.io. In this case, the development easyconfigs can be located under easyconfigs/ in the dev.eessi.io repository being used.

To allow for development builds, we leverage the --software-commit functionality (requires EasyBuild v4.9.3 or higher). This lets us build a given application from a specific commit in a repository. This can also be done from a fork, by changing the github_account field in the easyconfig file. We've created a template for ESPResSo based on the standard easyconfig of the most recent version. The relevant fields are:

easyblock = 'CMakeMake'\n\nname = 'ESPResSo'\nversion = 'devel'\nversionsuffix = '-%(software_commit)s'\n\nhomepage = 'https://espressomd.org/wordpress'\ndescription = \"\"\"A software package for performing and analyzing scientific Molecular Dynamics simulations.\"\"\"\n\ngithub_account = 'espressomd'\nsource_urls = ['https://github.com/%(github_account)s/%(name)s/archive/']\n\nsources = ['%(software_commit)s.tar.gz']\n

--software-commit disables --robot

Using --software-commit disables the use of --robot, so make sure that you explicitly include new dependencies that might need to be installed. Otherwise, the easyconfig files won't be found.

You can also make additional changes to the easyconfig file, for example, if the new functionality requires new build or runtime dependencies, patches, configuration options, etc. It's a good idea to try installing from a specific commit locally first, to at least see if everything is parsed correctly and confirm that the right sources are being downloaded.

While the process to build for dev.eessi.io is similar to the one for the production repository, there are a few additional details to keep in mind.

"},{"location":"adding_software/adding_development_software/#software-version","title":"Software version","text":"

Installations to the EESSI production repository refer to specific versions of applications. However, development builds can't follow the same approach, as they are most often not pegged to a release. Because of this, it is possible to include a descriptive \"version\" label in the version parameter of the easyconfig file for a given (set of) installations.

Note that some applications are built with custom easyblocks, which may use the version parameter to determine how the installation is meant to work (for example, recent versions may need to copy files to a new directory). Make sure that you account for this; otherwise you may install software differently than intended. If you encounter issues, you can open an issue in our support portal.

"},{"location":"adding_software/adding_development_software/#installing-dependencies","title":"Installing dependencies","text":"

Installations in dev.eessi.io are done on top of software.eessi.io. That means if your development build depends on some application that is already installed in software.eessi.io, then that will simply be used. However, if you need to add a new dependency, then this must be included as part of the build: that means including an easyconfig file for it, and adding it to the right easystack file.
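As a sketch of what that could look like, an easystack file could list the easyconfig for the new dependency ahead of the development build itself; somedep-1.0-GCC-12.3.0.eb is a made-up name for illustration, while the ESPResSo entry matches the example elsewhere on this page:

```
easyconfigs:
  - somedep-1.0-GCC-12.3.0.eb
  - ESPResSo-devel-foss-2023a-software-commit.eb:
      options:
        software-commit: 2ba17de6096933275abec0550981d9122e4e5f28
```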

"},{"location":"adding_software/adding_development_software/#using-commit-ids-or-tags-for-software-commit","title":"Using commit IDs or tags for --software-commit","text":"

Installing with --software-commit requires that you include either a commit ID or a tag. The installation procedure will use this to obtain the sources for the build. Because tags can be changed to point to a different commit ID, we recommend avoiding them and sticking to the commit ID itself. You can then include this in the versionsuffix in your easyconfig file, to generate a unique (though \"ugly\") module name.

"},{"location":"adding_software/adding_development_software/#patch-files","title":"Patch files","text":"

If your specific development build requires patch files, you should add these to the easyconfigs/ directory. If the patch is already part of an EasyBuild release, this is not needed, as it will be taken directly from EasyBuild. If it is a new patch that is not in an EasyBuild release, then include it in the easyconfigs/ directory.

"},{"location":"adding_software/adding_development_software/#checksums","title":"Checksums","text":"

EasyBuild's easyconfig files typically contain checksums, as their use is highly recommended. By default, EasyBuild will compute the checksums of sources and patch files it needs for a given installation, and compare them with the values in the easyconfig file. Because builds for dev.eessi.io change much more often, hard-coded checksums become a problem, as they'd need to be updated with every new build. For this reason, we recommend not including checksums in your development easyconfig files (unless you need to, for a specific reason).

"},{"location":"adding_software/adding_development_software/#easystack-files","title":"Easystack files","text":"

After an easyconfig file has been created and added to the easyconfigs subdirectory, an easystack file that picks it up needs to be in place so that a build can be triggered.

Naming convention for easystack files

The easystack files must follow a naming convention, and be named something like software-eb-X.Y.Z-dev.yml, where X.Y.Z corresponds to the EasyBuild version used to install the software. Following our example for ESPResSo, it would look like:

easyconfigs:\n  - ESPResSo-devel-foss-2023a-software-commit.eb:\n      options:\n        software-commit: 2ba17de6096933275abec0550981d9122e4e5f28 # release 4.2.2\n

ESPResSo-devel-foss-2023a-software-commit.eb would be the name of the easyconfig file added in our example step above. Note the option passing the software-commit for the development version that should be built. For the sake of this example, the chosen commit actually corresponds to the 4.2.2 release of ESPResSo.

"},{"location":"adding_software/adding_development_software/#triggering-builds","title":"Triggering builds","text":"

We use the EESSI build-test-deploy bot to handle software builds. All one needs to do is open a PR with the changes adding the easyconfig and easystack files, and comment bot: build. This can only be done by previously authorized users. The current build cluster for dev.eessi.io only builds for the zen2 CPU microarchitecture, but this is likely to change.

Once a build is complete and the bot:deploy label is added, a staging PR can be merged to deploy the application to the dev.eessi.io CernVM-FS repository. On a system with dev.eessi.io mounted, all that is left is to module use /cvmfs/dev.eessi.io/versions/2023.06/modules/all and try out the software!

There is currently no initialisation script or module for dev.eessi.io, but this feature is coming soon.

"},{"location":"adding_software/building_software/","title":"Building software","text":"

(for maintainers)

"},{"location":"adding_software/building_software/#bot_build","title":"Instructing the bot to build","text":"

Once the pull request is open, you can instruct the bot to build the software by posting a comment.

For more information, see the building section in the bot documentation.

Warning

Permission to trigger building of software must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/building_software/#guidelines","title":"Guidelines","text":""},{"location":"adding_software/building_software/#checking-the-builds","title":"Checking the builds","text":"

If all goes well, you should see SUCCESS for each build, along with a button to get more information about the checks that were performed, and metadata on the resulting artefact.

Note

Make sure the result is what you expect it to be for all builds before you deploy!

"},{"location":"adding_software/building_software/#failing-builds","title":"Failing builds","text":"

Warning

The bot will currently not give you any information on how or why a build is failing.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/building_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"

To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.

For more information, see the deploying section in the bot documentation.

Warning

Permission to trigger deployment of software installations must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/building_software/#merging-the-pull-request","title":"Merging the pull request","text":"

You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.

Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass, and then the pull request can be merged.

Note

This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml) that checks for missing installations, in the correct branch (for example 2023.06) of the software-layer.

If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!

Warning

You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/building_software/#getting-help","title":"Getting help","text":"

If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer channel of the EESSI Slack.

"},{"location":"adding_software/contribution_policy/","title":"Contribution policy","text":"

(version v0.1.0 - updated 9 Nov 2023)

Note

This policy is subject to change, please check back regularly.

"},{"location":"adding_software/contribution_policy/#purpose","title":"Purpose","text":"

The purpose of this contribution policy is to provide guidelines for adding software to EESSI.

It informs about what requirements must be met in order for software to be eligible for inclusion in the EESSI software layer.

"},{"location":"adding_software/contribution_policy/#requirements","title":"Requirements","text":"

The following requirements must be taken into account when adding software to EESSI.

Note that additional restrictions may apply in specific cases that are currently not covered explicitly by this policy.

"},{"location":"adding_software/contribution_policy/#freely_redistributable_software","title":"i) Freely redistributable software","text":"

Only freely redistributable software can be added to the EESSI repository, and we strongly prefer including only open source software in EESSI.

Make sure that you are aware of the relevant software licenses, and that redistribution of the software you want to add to EESSI is allowed.

For more information about a specific software license, see the SPDX license list.

Note

We intend to automatically verify that this requirement is met, by requiring that the SPDX license identifier is provided for all software included in EESSI.

"},{"location":"adding_software/contribution_policy/#built_by_bot","title":"ii) Built by the bot","text":"

All software included in the EESSI repository must be built autonomously by our bot.

For more information, see our semi-automatic software installation procedure.

"},{"location":"adding_software/contribution_policy/#easybuild","title":"iii) Built and installed with EasyBuild","text":"

We currently require that all software installations in EESSI are built and installed using EasyBuild.

We strongly prefer that the latest release of EasyBuild that is available at the time is used to add software to EESSI.

The use of --from-pr and --include-easyblocks-from-pr to pull in changes to EasyBuild that are required to make the installation work correctly in EESSI is allowed, but only if that is strictly required (that is, if those changes are not included yet in the latest EasyBuild release).

"},{"location":"adding_software/contribution_policy/#supported_toolchain","title":"iv) Supported compiler toolchain","text":"

A compiler toolchain that is still supported by the latest EasyBuild release must be used for building the software.

For more information on supported toolchains, see the EasyBuild toolchain support policy.

"},{"location":"adding_software/contribution_policy/#recent_toolchains","title":"v) Recent toolchain versions","text":"

We strongly prefer adding software to EESSI that was built with a recent compiler toolchain.

When adding software to a particular version of EESSI, you should use a toolchain version that is already installed.

If you would like to see an additional toolchain version being added to a particular version of EESSI, please open a support request for this, and motivate your request.

"},{"location":"adding_software/contribution_policy/#recent_software_versions","title":"vi) Recent software versions","text":"

We strongly prefer adding sufficiently recent software versions to EESSI.

If you would like to add older software versions, please clearly motivate the need for this in your contribution.

"},{"location":"adding_software/contribution_policy/#cpu_targets","title":"vii) CPU targets","text":"

Software that is added to EESSI should work on all supported CPU targets.

Exceptions to this requirement are allowed if technical problems that can not be resolved with reasonable effort prevent the installation of the software for specific CPU targets.

"},{"location":"adding_software/contribution_policy/#testing","title":"viii) Testing","text":"

We should be able to test the software installations via the EESSI test suite, in particular for software applications and user-facing tools.

Ideally one or more tests are available that verify that the software is functionally correct, and that it (still) performs well.

Tests that are run during the software installation procedure as performed by EasyBuild must pass. Exceptions can be made if only a small subset of tests fail for specific CPU targets, as long as these exceptions are tracked and an effort is made to assess the impact of those failing tests.

It should be possible to run a minimal smoke test for the software included in EESSI, for example using EasyBuild's --sanity-check-only feature.

Note

The EESSI test suite is still in active development, and currently only has a minimal set of tests available.

When the test suite is more mature, this requirement will be enforced more strictly.

"},{"location":"adding_software/contribution_policy/#changelog","title":"Changelog","text":""},{"location":"adding_software/contribution_policy/#v010-9-nov-2023","title":"v0.1.0 (9 Nov 2023)","text":""},{"location":"adding_software/debugging_failed_builds/","title":"Debugging failed builds","text":"

(for contributors + maintainers)

Unfortunately, software does not always build successfully. Since EESSI targets novel CPU architectures as well, build failures on such platforms are quite common, as the software and/or the software build systems have not always been adjusted to support these architectures yet.

In EESSI, all software packages are built by a bot. This is great for builds that complete successfully as we can build many software packages for a wide range of hardware with little human intervention. However, it does mean that you, as contributor, can not easily access the build directory and build logs to figure out build issues.

This page describes how you can interactively reproduce failed builds, so that you can more easily debug the issue.

Throughout this page, we will use this PR as an example. It intends to add LAMMPS to EESSI. Among other issues, it failed while building Plumed.

"},{"location":"adding_software/debugging_failed_builds/#prerequisites","title":"Prerequisites","text":"

You will need to have:

"},{"location":"adding_software/debugging_failed_builds/#preparing-the-environment","title":"Preparing the environment","text":"

A number of steps are needed to create the same environment in which the bot builds.

"},{"location":"adding_software/debugging_failed_builds/#fetching-the-feature-branch","title":"Fetching the feature branch","text":"

Looking at the example PR, we see the PR is created from this fork. First, we clone the fork, then checkout the feature branch (LAMMPS_23Jun2022)

git clone https://github.com/laraPPr/software-layer/\ncd software-layer\ngit checkout LAMMPS_23Jun2022\n
Alternatively, if you already have a clone of the software-layer you can add it as a new remote
cd software-layer\ngit remote add laraPPr https://github.com/laraPPr/software-layer/\ngit fetch laraPPr\ngit checkout LAMMPS_23Jun2022\n

"},{"location":"adding_software/debugging_failed_builds/#starting-a-shell-in-the-eessi-container","title":"Starting a shell in the EESSI container","text":"

Simply run the EESSI container (eessi_container.sh), which should be in the root of the software-layer repository. Use -r to specify which EESSI repository (e.g. software.eessi.io, dev.eessi.io, ...) should be mounted in the container

./eessi_container.sh --access rw -r software.eessi.io\n

If you want to install NVIDIA GPU software, make sure to also add the --nvidia all argument, to ensure that your GPU drivers get mounted inside the container:

./eessi_container.sh --access rw -r software.eessi.io --nvidia all\n

Note

You may have to press enter to clearly see the prompt as some messages beginning with CernVM-FS: have been printed after the first prompt Apptainer> was shown.

"},{"location":"adding_software/debugging_failed_builds/#more-efficient-approach-for-multiplecontinued-debugging-sessions","title":"More efficient approach for multiple/continued debugging sessions","text":"

While the above works perfectly well, you might not be able to complete your debugging session in one go. With the above approach, several steps will just be repeated every time you start a debugging session:

To avoid this, we create two directories. One holds the container & host_injections, which are (typically) common between multiple PRs, so you don't have to redownload the container or reinstall the host_injections when you start working on another PR. The other holds the PR-specific data: a tarball storing the software you'll build in your interactive debugging session. The paths we pick here are just examples; you can pick any persistent, writeable location for this:

eessi_common_dir=${HOME}/eessi-manual-builds\neessi_pr_dir=${HOME}/pr360\n

Now, we start the container

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir}\n

Here, the SINGULARITY_CACHEDIR makes sure that if the container was already downloaded, and is present in the cache, it is not redownloaded. The host injections will just be picked up from ${eessi_common_dir}/host_injections (if those were already installed before). And finally, the --save makes sure that everything that you build in the container gets stored in a tarball as soon as you exit the container.

Note that the first exit command will first make you exit the Gentoo prefix environment. Only the second will take you out of the container, and print where the tarball will be stored:

[EESSI 2023.06] $ exit\nlogout\nLeaving Gentoo Prefix with exit status 1\nApptainer> exit\nexit\nSaved contents of tmp directory '/tmp/eessi-debug.VgLf1v9gf0' to tarball '${HOME}/pr360/EESSI-1698056784.tgz' (to resume session add '--resume ${HOME}/pr360/EESSI-1698056784.tgz')\n

Note that the tarballs can be quite sizeable, so make sure to pick a filesystem where you have a large enough quota.

Next time you want to continue investigating this issue, you can start the container with --resume DIR/TGZ and continue where you left off, having all dependencies already built and available.

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --resume ${eessi_pr_dir}/EESSI-1698056784.tgz\n

For a detailed description on using the script eessi_container.sh, see here.

Note

Reusing a previously downloaded container, or an existing CUDA installation from host_injections, is not a good approach if those could be the cause of your issues. If you are unsure whether this is the case, simply follow the regular approach to starting the EESSI container.

Note

It is recommended to clean the container cache and host_injections directories every now and again, to make sure you pick up the latest changes for those two components.

"},{"location":"adding_software/debugging_failed_builds/#start-the-gentoo-prefix-environment","title":"Start the Gentoo Prefix environment","text":"

The next step is to start the Gentoo Prefix environment.

First, you'll have to set which repository and version of EESSI you are building for. For example:

export EESSI_CVMFS_REPO=/cvmfs/software.eessi.io\nexport EESSI_VERSION=2023.06\n

Then, we set EESSI_OS_TYPE and EESSI_CPU_FAMILY and run the startprefix command to start the Gentoo Prefix environment:

export EESSI_OS_TYPE=linux  # We only support Linux for now\nexport EESSI_CPU_FAMILY=$(uname -m)\n${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/startprefix\n

Unfortunately, there is no way to retain the ${EESSI_CVMFS_REPO} and ${EESSI_VERSION} in your prefix environment, so we have to set them again. For example:

export EESSI_CVMFS_REPO=/cvmfs/software.eessi.io\nexport EESSI_VERSION=2023.06\n

Note

By activating the Gentoo Prefix environment, the system tools (e.g. ls) you would normally use are now provided by Gentoo Prefix, instead of the container OS. E.g. running which ls after starting the prefix environment as above will return /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin/ls. This makes the builds completely independent from the container OS.

"},{"location":"adding_software/debugging_failed_builds/#building-for-the-generic-optimization-target","title":"Building for the generic optimization target","text":"

If you want to replicate a build with generic optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic) you will need to set the following environment variable:

export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic\n

"},{"location":"adding_software/debugging_failed_builds/#building-software-with-the-eessi-install-softwaresh-script","title":"Building software with the EESSI-install-software.sh script","text":"

The Automatic build and deploy bot installs software by executing the EESSI-install-software.sh script. The advantage is that running this script is the closest you can get to replicating the bot's behaviour - and thus the failure. The downside is that if a PR adds a lot of software, it may take quite a long time to run - even if you might already know what the problematic software package is. In that case, you might be better off following the steps under Building software from an easystack file or Building an individual package.

Note that you could also combine approaches: first build everything using the EESSI-install-software.sh script, until you reproduce the failure. Then, start making modifications (e.g. changes to the EasyConfig, patches, etc) and trying to rebuild that package individually to test your changes.

To build software using the EESSI-install-software.sh script, you'll first need to get the diff file for the PR. This is used by the EESSI-install-software.sh script to see what has changed in this PR - and thus what needs to be built for it. To download the diff for PR 360, we would e.g. do

wget https://github.com/EESSI/software-layer/pull/360.diff\n

Now, we run the EESSI-install-software.sh script:

./EESSI-install-software.sh\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-from-an-easystack-file","title":"Building software from an easystack file","text":""},{"location":"adding_software/debugging_failed_builds/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"

To activate the software environment, run

source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n

Note

If you get an error bash: /versions//init/bash: No such file or directory, you forgot to reset the ${EESSI_CVMFS_REPO} and ${EESSI_VERSION} environment variables at the end of the previous step.

Note

If you want to build with generic optimization, you should run export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic before sourcing.

For more info on starting the EESSI software environment, see here

"},{"location":"adding_software/debugging_failed_builds/#configure-easybuild","title":"Configure EasyBuild","text":"

It is important that we configure EasyBuild in the same way as the bot uses it, with one small exception: our working directory will be different. Typically, that doesn't matter, but it's good to be aware of this one difference, in case you fail to replicate the build failure.

In this example, we create a unique temporary directory inside /tmp to serve as our workdir. Then, we source the configure_easybuild script, which configures EasyBuild by setting environment variables.

export WORKDIR=$(mktemp --directory --tmpdir=/tmp  -t eessi-debug.XXXXXXXXXX)\nsource scripts/utils.sh && source configure_easybuild\n
Among other things, the configure_easybuild script sets the install path for EasyBuild to the correct installation directory (${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_SOFTWARE_SUBDIR}). This is the exact same path the bot uses to build; writes to this path in /cvmfs (which is normally read-only) go through a writeable overlay filesystem in the container, just as they do for the bot.

Note

If you started the container using --resume, you may want WORKDIR to point to the workdir you created previously (instead of creating a new, temporary directory with mktemp).

Note

If you want to replicate a build with generic optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic) you will need to set export EASYBUILD_OPTARCH=GENERIC after sourcing configure_easybuild.

Next, we need to determine the correct version of EasyBuild to load. Since the example PR changes the file eessi-2023.06-eb-4.8.1-2021b.yml, this tells us the bot was using version 4.8.1 of EasyBuild to build this. Thus, we load that version of the EasyBuild module and check if everything was configured correctly:

module load EasyBuild/4.8.1\neb --show-config\n
You should get something similar to

#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath            (E) = /tmp/easybuild/easybuild/build\ncontainerpath        (E) = /tmp/easybuild/easybuild/containers\ndebug                (E) = True\nexperimental         (E) = True\nfilter-deps          (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib, Yasm\nfilter-env-vars      (E) = LD_LIBRARY_PATH\nhooks                (E) = ${HOME}/software-layer/eb_hooks.py\nignore-osdeps        (E) = True\ninstallpath          (E) = /tmp/easybuild/software/linux/aarch64/neoverse_n1\nmodule-extensions    (E) = True\npackagepath          (E) = /tmp/easybuild/easybuild/packages\nprefix               (E) = /tmp/easybuild/easybuild\nread-only-installdir (E) = True\nrepositorypath       (E) = /tmp/easybuild/easybuild/ebfiles_repo\nrobot-paths          (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/aarch64/neoverse_n1/software/EasyBuild/4.8.1/easybuild/easyconfigs\nrpath                (E) = True\nsourcepath           (E) = /tmp/easybuild/easybuild/sources:\nsysroot              (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/aarch64\ntrace                (E) = True\nzip-logs             (E) = bzip2\n
"},{"location":"adding_software/debugging_failed_builds/#building-everything-in-the-easystack-file","title":"Building everything in the easystack file","text":"

In our example PR, the easystack file that was changed was eessi-2023.06-eb-4.8.1-2021b.yml. To build this, we run (in the directory that contains the checkout of this feature branch):

eb --easystack eessi-2023.06-eb-4.8.1-2021b.yml --robot\n
After some time, this build fails while trying to build Plumed, and we can access the build log to look for clues on why it failed.

"},{"location":"adding_software/debugging_failed_builds/#building-an-individual-package","title":"Building an individual package","text":"

First, prepare the environment by following the Starting the EESSI software environment and Configure EasyBuild steps above.

In our example PR, the individual package that was added to eessi-2023.06-eb-4.8.1-2021b.yml was LAMMPS-23Jun2022-foss-2021b-kokkos.eb. To mimic the build behaviour, we'll also have to (re)use any options that are listed in the easystack file for LAMMPS-23Jun2022-foss-2021b-kokkos.eb, in this case the option --from-pr 19000. Thus, to build, we run:

eb LAMMPS-23Jun2022-foss-2021b-kokkos.eb --robot --from-pr 19000\n
After some time, this build fails while trying to build Plumed, and we can access the build log to look for clues on why it failed.

Note

While this might be faster than the easystack-based approach, it is not how the bot builds. So while it may reproduce the failure the bot encounters, it may also not reproduce the bug at all (no failure), or run into different bugs. If you want to be sure, use the easystack-based approach.

"},{"location":"adding_software/debugging_failed_builds/#rebuilding-software","title":"Rebuilding software","text":"

Rebuilding software requires an additional step at the beginning: the software first needs to be removed. We assume you've already checked out the feature branch. Then, you need to start the container with the additional --fakeroot argument, otherwise you will not be able to remove files from the /cvmfs prefix. Make sure to also include the --save argument, as we will need the tarball later on. E.g.

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --fakeroot\n
Then, initialize the EESSI environment
source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n
and get the diff file for the corresponding PR, e.g. for PR 123:
wget https://github.com/EESSI/software-layer/pull/123.diff\n
Finally, run the EESSI-remove-software.sh script
./EESSI-remove-software.sh\n

This should remove any software specified in a rebuild easystack that got added in your current feature branch.

Now, exit the container, paying attention to the instructions that are printed to resume later, e.g.:

Saved contents of tmp directory '/tmp/eessi.WZxeFUemH2' to tarball '/home/myuser/pr507/EESSI-1711538681.tgz' (to resume session add '--resume /home/myuser/pr507/EESSI-1711538681.tgz')\n

Now, continue with the original instructions to start the container (i.e. either here or with this alternate approach) and make sure to add the --resume flag. This way, you are resuming from the tarball (i.e. with the software removed that has to be rebuilt), but in a new container in which you have regular (i.e. no root) permissions.

"},{"location":"adding_software/debugging_failed_builds/#running-the-test-step","title":"Running the test step","text":"

If you are still in the prefix layer (i.e. after previously building something), exit it first:

$ exit\nlogout\nLeaving Gentoo Prefix with exit status 0\n
Then, source the EESSI init script (again):
Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} Apptainer>\n

Note

If you are in a SLURM environment, make sure to run for i in $(env | grep SLURM); do unset \"${i%=*}\"; done to unset any SLURM environment variables. Failing to do so will cause mpirun to pick up on these and e.g. infer how many slots are available. If you run into errors of the form \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\", you probably forgot this step.

Then, execute the run_tests.sh script. We are assuming you are still in the root of the software-layer repository that you cloned earlier:

./run_tests.sh\n
If all goes well, you should see (part of) the EESSI test suite being run by ReFrame, finishing with something like

[  PASSED  ] Ran X/Y test case(s) from Z check(s) (0 failure(s), 0 skipped, 0 aborted)\n

Note

If you are running on a system with hyperthreading enabled, you may still run into the \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\" error from mpirun, because hardware threads are not considered to be slots by default by OpenMPI's mpirun. In this case, run with OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1 ./run_tests.sh (for OpenMPI 4.X) or PRTE_MCA_rmaps_default_mapping_policy=:hwtcpus ./run_tests.sh (for OpenMPI 5.X).

"},{"location":"adding_software/debugging_failed_builds/#known-causes-of-issues-in-eessi","title":"Known causes of issues in EESSI","text":""},{"location":"adding_software/debugging_failed_builds/#the-custom-system-prefix-of-the-compatibility-layer","title":"The custom system prefix of the compatibility layer","text":"

Some installations might expect the system root (sysroot, for short) to be in /. However, in case of EESSI, we are building against the OS in the compatibility layer. Thus, our sysroot is something like ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}. This can cause issues if installation procedures assume the sysroot is in /.

One example of a sysroot issue was in installing wget. The EasyConfig for wget defined

# make sure pkg-config picks up system packages (OpenSSL & co)\npreconfigopts = \"export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
This will not work in EESSI, since the OpenSSL should be picked up from the compatibility layer. This was fixed by changing the EasyConfig to read
preconfigopts = \"export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
The %(sysroot)s is a template value which EasyBuild will resolve to the value that has been configured in EasyBuild for sysroot (it is one of the fields printed by eb --show-config if a non-standard sysroot is configured).
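The mechanics behind this are ordinary Python %-style string formatting. A simplified sketch of how such a template resolves (the sysroot value is an example; EasyBuild performs the equivalent substitution internally with its full set of template values):

```python
# Simplified illustration of %(sysroot)s resolution; EasyBuild does this
# internally using its complete template dictionary.
template = 'export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig && '
values = {'sysroot': '/cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64'}
resolved = template % values
print(resolved)
```

After substitution, the PKG_CONFIG_PATH entries point into the compatibility layer instead of the container OS, which is exactly what the fixed easyconfig achieves.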

If you encounter issues where the installation can not find something that is normally provided by the OS (i.e. not one of the dependencies in your module environment), you may need to resort to a similar approach.

"},{"location":"adding_software/debugging_failed_builds/#the-writeable-overlay","title":"The writeable overlay","text":"

The writeable overlay in the container is known to be a bit slow sometimes. Thus, we have seen tests failing because they exceed some timeout (e.g. this issue).

To investigate if the writeable overlay is somehow the issue, you can make sure the installation gets done somewhere else, e.g. in the temporary directory in /tmp that you created as workdir. To do this, set

export EASYBUILD_INSTALLPATH=${WORKDIR}\n

after the step in which you have sourced the configure_easybuild script. Note that in order to find (with module av) any modules that get installed here, you will need to add this path to the MODULEPATH:

module use ${EASYBUILD_INSTALLPATH}/modules/all\n

Then, retry building the software (as described above). If the build now succeeds, you know that indeed the writeable overlay caused the issue. We have to build in this writeable overlay when we do real deployments. Thus, if you hit such a timeout, try to see if you can (temporarily) modify the timeout value in the test so that it passes.

"},{"location":"adding_software/deploying_software/","title":"Deploying software","text":"

(for maintainers)

"},{"location":"adding_software/deploying_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"

To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.

For more information, see the deploying section in the bot documentation.

Warning

Permission to trigger deployment of software installations must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/deploying_software/#merging-the-pull-request","title":"Merging the pull request","text":"

You should be able to verify in the pull request whether the ingestion has been done: the CI initially fails, indicating that some software installations listed in your modified easystack are missing.

Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass, and then the pull request can be merged.

Note

This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml) that checks for missing installations, in the correct branch (for example 2023.06) of the software-layer.

If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!

Warning

You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/deploying_software/#getting-help","title":"Getting help","text":"

If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer channel of the EESSI Slack.

"},{"location":"adding_software/opening_pr/","title":"Opening a pull request","text":"

(for contributors)

To add software to EESSI, you should go through the semi-automatic software installation procedure outlined below.

Warning

Make sure you are also aware of our contribution policy when adding software to EESSI.

"},{"location":"adding_software/opening_pr/#preparation","title":"Preparation","text":"

Before you can make a pull request to the software-layer, you should fork the repository in your GitHub account.

For the remainder of these instructions, we assume that your GitHub account is @koala.

Note

Don't forget to replace koala with the name of your GitHub account in the commands below!

1) Clone the EESSI/software-layer repository:

mkdir EESSI\ncd EESSI\ngit clone https://github.com/EESSI/software-layer\ncd software-layer\n

2) Add your fork as a remote

git remote add koala git@github.com:koala/software-layer.git\n

3) Check out the branch that corresponds to the version of EESSI repository you want to add software to, for example 2023.06-software.eessi.io:

git checkout 2023.06-software.eessi.io\n

Note

The commands above only need to be run once, to prepare your setup for making pull requests.

"},{"location":"adding_software/opening_pr/#software_layer_pull_request","title":"Creating a pull request","text":"

1) Make sure that your 2023.06-software.eessi.io branch in the checkout of the EESSI/software-layer repository is up-to-date

cd EESSI/software-layer\ngit checkout 2023.06-software.eessi.io\ngit pull origin 2023.06-software.eessi.io\n

2) Create a new branch (use a sensible name, not example_branch as below), and check it out

git checkout -b example_branch\n

3) Determine the correct easystack file to change, and add one or more lines to it that specify which easyconfigs should be installed

echo '  - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\n
Note that the naming scheme is standardized and should be eessi-<eessi_version>-eb-<eb_version>-<toolchain_version>.yml. See the official EasyBuild documentation on easystack files for more information on the syntax.
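As a small sketch of how that standardized name is composed (using the versions from the example command above):

```shell
# Compose the standardized easystack filename from its components:
# eessi-<eessi_version>-eb-<eb_version>-<toolchain_version>.yml
EESSI_VERSION=2023.06
EB_VERSION=4.8.2
TOOLCHAIN_VERSION=2023a
echo "eessi-${EESSI_VERSION}-eb-${EB_VERSION}-${TOOLCHAIN_VERSION}.yml"
# -> eessi-2023.06-eb-4.8.2-2023a.yml
```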

4) Stage and commit the changes into your branch with a sensible message

git add easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\ngit commit -m \"{2023.06}[GCC/12.3.0] example 1.2.3\"\n

5) Push your branch to your fork of the software-layer repository

git push koala example_branch\n

6) Go to the GitHub web interface to open your pull request, or use the helpful link that should show up in the output of the git push command.

Make sure you target the correct branch: the one that corresponds to the version of EESSI you want to add software to (like 2023.06-software.eessi.io).

If all goes well, one or more bots should almost instantly create a comment in your pull request with an overview of how it is configured; you will need this information when providing build instructions.

"},{"location":"adding_software/opening_pr/#rebuilding_software","title":"Rebuilding software","text":"

We typically do not rebuild software, since (strictly speaking) this breaks reproducibility for anyone using the software. However, there are certain situations in which a rebuild is difficult or impossible to avoid.

To do a rebuild, you add the software you want to rebuild to a dedicated easystack file in the rebuilds directory. Use the following naming convention: YYYYMMDD-eb-<EB_VERSION>-<APPLICATION_NAME>-<APPLICATION_VERSION>-<SHORT_DESCRIPTION>.yml, where YYYYMMDD is the opening date of your PR. E.g. 2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml was added in a PR on the 6th of May 2024 and used to rebuild CUDA-12.1.1 using EasyBuild 4.9.1 to resolve an issue with some runtime libraries missing from the initial CUDA 12.1.1 installation.

At the top of your easystack file, please use comments to include a short description, and make sure to include any relevant links to related issues (e.g. from the GitHub repositories of EESSI, EasyBuild, or the software you are rebuilding).

As an example, consider the full easystack file (2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml) used for the aforementioned CUDA rebuild:

# 2024.05.06\n# Original matching of files we could ship was not done correctly. We were\n# matching the basename for files (e.g., libcudart.so from libcudart.so.12)\n# rather than the name stub (libcudart)\n# See https://github.com/EESSI/software-layer/pull/559\neasyconfigs:\n  - CUDA-12.1.1.eb:\n        options:\n                accept-eula-for: CUDA\n

By separating rebuilds in dedicated files, we still maintain a complete software bill of materials: it is transparent what got rebuilt, for which reason, and when.

"},{"location":"adding_software/overview/","title":"Overview of adding software to EESSI","text":"

We welcome contributions to the EESSI software stack. This page shows the procedure and provides links to the contribution policy and the technical details of making a contribution.

"},{"location":"adding_software/overview/#contribute-a-software-to-the-eessi-software-stack","title":"Contribute software to the EESSI software stack","text":"
\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n    I(contributor)  \n    K(reviewer)\n    A(Is there an EasyConfig for software) -->|No|B(Create an EasyConfig and contribute it to EasyBuild)\n    A --> |Yes|D(Create a PR to software-layer)\n    B --> C(Evaluate and merge pull request)\n    C --> D\n    D --> E(Review PR & trigger builds)\n    E --> F(Debug build issue if needed)\n    F --> G(Deploy tarballs to S3 bucket)\n    G --> H(Ingest tarballs in EESSI by merging staging PRs)\n     classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n     class A,B,D,F,I blue\n     click B \"https://easybuild.io/\"\n     click D \"../opening_pr/\"\n     click F \"../debugging_failed_builds/\"\n
"},{"location":"adding_software/overview/#contributing-a-reframe-test-to-the-eessi-test-suite","title":"Contributing a ReFrame test to the EESSI test suite","text":"

Ideally, a contributor prepares a ReFrame test for the software to be added to the EESSI software stack.

\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n\n    Z(Create ReFrame test & PR to test-suite) --> Y(Review PR & run new test)\n    Y --> W(Debug issue if needed) \n    W --> V(Review PR if needed)\n    V --> U(Merge PR)\n     classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n     class Z,W blue\n
"},{"location":"adding_software/overview/#more-about-adding-software-to-eessi","title":"More about adding software to EESSI","text":"

If you need help with adding software to EESSI, please open a support request.

"},{"location":"available_software/overview/","title":"Available software (via modules)","text":"

This table gives an overview of all the available software in EESSI per specific CPU target.

Name aarch64 x86_64 amd intel generic neoverse_n1 neoverse_v1 generic zen2 zen3 zen4 haswell skylake_avx512"},{"location":"available_software/detail/ALL/","title":"ALL","text":"

A Load Balancing Library (ALL) aims to provide an easy way to include dynamic domain-based load balancing into particle based simulation codes. The library is developed in the Simulation Laboratory Molecular Systems of the J\u00fclich Supercomputing Centre at Forschungszentrum J\u00fclich.

https://gitlab.jsc.fz-juelich.de/SLMS/loadbalancing

"},{"location":"available_software/detail/ALL/#available-modules","title":"Available modules","text":"

The overview below shows which ALL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ALL, load one of these modules using a module load command like:

module load ALL/0.9.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ALL/0.9.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/AOFlagger/","title":"AOFlagger","text":"

The AOFlagger is a tool that can find and remove radio-frequency interference (RFI) in radio astronomical observations. It can make use of Lua scripts to make flagging strategies flexible, and the tools are applicable to a wide set of telescopes.

https://aoflagger.readthedocs.io/

"},{"location":"available_software/detail/AOFlagger/#available-modules","title":"Available modules","text":"

The overview below shows which AOFlagger installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using AOFlagger, load one of these modules using a module load command like:

module load AOFlagger/3.4.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 AOFlagger/3.4.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/ASE/","title":"ASE","text":"

ASE is a Python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, which contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.

https://wiki.fysik.dtu.dk/ase

"},{"location":"available_software/detail/ASE/#available-modules","title":"Available modules","text":"

The overview below shows which ASE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ASE, load one of these modules using a module load command like:

module load ASE/3.22.1-gfbf-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ASE/3.22.1-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/ASE/#ase3221-gfbf-2022b","title":"ASE/3.22.1-gfbf-2022b","text":"

This is a list of extensions included in the module:

ase-3.22.1, ase-ext-20.9.0, pytest-mock-3.8.2

"},{"location":"available_software/detail/ATK/","title":"ATK","text":"

ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.

https://developer.gnome.org/atk/

"},{"location":"available_software/detail/ATK/#available-modules","title":"Available modules","text":"

The overview below shows which ATK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ATK, load one of these modules using a module load command like:

module load ATK/2.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ATK/2.38.0-GCCcore-13.2.0 x x x x x x x x x ATK/2.38.0-GCCcore-12.3.0 x x x x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Abseil/","title":"Abseil","text":"

Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. The Abseil library code is collected from Google's own C++ code base, has been extensively tested and used in production, and is the same code we depend on in our daily coding lives.

https://abseil.io/

"},{"location":"available_software/detail/Abseil/#available-modules","title":"Available modules","text":"

The overview below shows which Abseil installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Abseil, load one of these modules using a module load command like:

module load Abseil/20240116.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Abseil/20240116.1-GCCcore-13.2.0 x x x x x x x x x Abseil/20230125.3-GCCcore-12.3.0 x x x x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Archive-Zip/","title":"Archive-Zip","text":"

Provide an interface to ZIP archive files.

https://metacpan.org/pod/Archive::Zip

"},{"location":"available_software/detail/Archive-Zip/#available-modules","title":"Available modules","text":"

The overview below shows which Archive-Zip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Archive-Zip, load one of these modules using a module load command like:

module load Archive-Zip/1.68-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Archive-Zip/1.68-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Armadillo/","title":"Armadillo","text":"

Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.

https://arma.sourceforge.net/

"},{"location":"available_software/detail/Armadillo/#available-modules","title":"Available modules","text":"

The overview below shows which Armadillo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Armadillo, load one of these modules using a module load command like:

module load Armadillo/12.8.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Armadillo/12.8.0-foss-2023b x x x x x x x x x Armadillo/12.6.2-foss-2023a x x x x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Arrow/","title":"Arrow","text":"

Apache Arrow (incl. PyArrow Python bindings), a cross-language development platform for in-memory data.

https://arrow.apache.org

"},{"location":"available_software/detail/Arrow/#available-modules","title":"Available modules","text":"

The overview below shows which Arrow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Arrow, load one of these modules using a module load command like:

module load Arrow/16.1.0-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Arrow/16.1.0-gfbf-2023b x x x x x x x x x Arrow/14.0.1-gfbf-2023a x x x x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/Arrow/#arrow1610-gfbf-2023b","title":"Arrow/16.1.0-gfbf-2023b","text":"

This is a list of extensions included in the module:

pyarrow-16.1.0

"},{"location":"available_software/detail/Arrow/#arrow1401-gfbf-2023a","title":"Arrow/14.0.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

pyarrow-14.0.1

"},{"location":"available_software/detail/BCFtools/","title":"BCFtools","text":"

Samtools is a suite of programs for interacting with high-throughput sequencing data. BCFtools handles reading/writing BCF2/VCF/gVCF files and calling/filtering/summarising SNP and short indel sequence variants.

https://www.htslib.org/

"},{"location":"available_software/detail/BCFtools/#available-modules","title":"Available modules","text":"

The overview below shows which BCFtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BCFtools, load one of these modules using a module load command like:

module load BCFtools/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BCFtools/1.18-GCC-12.3.0 x x x x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BLAST%2B/","title":"BLAST+","text":"

Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.

https://blast.ncbi.nlm.nih.gov/

"},{"location":"available_software/detail/BLAST%2B/#available-modules","title":"Available modules","text":"

The overview below shows which BLAST+ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BLAST+, load one of these modules using a module load command like:

module load BLAST+/2.14.1-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLAST+/2.14.1-gompi-2023a x x x x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/BLIS/","title":"BLIS","text":"

BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries.

https://github.com/flame/blis/

"},{"location":"available_software/detail/BLIS/#available-modules","title":"Available modules","text":"

The overview below shows which BLIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BLIS, load one of these modules using a module load command like:

module load BLIS/0.9.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLIS/0.9.0-GCC-13.2.0 x x x x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BWA/","title":"BWA","text":"

Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.

http://bio-bwa.sourceforge.net/

"},{"location":"available_software/detail/BWA/#available-modules","title":"Available modules","text":"

The overview below shows which BWA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BWA, load one of these modules using a module load command like:

module load BWA/0.7.18-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BWA/0.7.18-GCCcore-12.3.0 x x x x x x x x x BWA/0.7.17-20220923-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BamTools/","title":"BamTools","text":"

BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.

https://github.com/pezmaster31/bamtools

"},{"location":"available_software/detail/BamTools/#available-modules","title":"Available modules","text":"

The overview below shows which BamTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BamTools, load one of these modules using a module load command like:

module load BamTools/2.5.2-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BamTools/2.5.2-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bazel/","title":"Bazel","text":"

Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.

https://bazel.io/

"},{"location":"available_software/detail/Bazel/#available-modules","title":"Available modules","text":"

The overview below shows which Bazel installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bazel, load one of these modules using a module load command like:

module load Bazel/6.3.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bazel/6.3.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/","title":"BeautifulSoup","text":"

Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping.

https://www.crummy.com/software/BeautifulSoup

"},{"location":"available_software/detail/BeautifulSoup/#available-modules","title":"Available modules","text":"

The overview below shows which BeautifulSoup installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BeautifulSoup, load one of these modules using a module load command like:

module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/#beautifulsoup4122-gcccore-1230","title":"BeautifulSoup/4.12.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

BeautifulSoup-4.12.2, soupsieve-2.4.1

"},{"location":"available_software/detail/Bio-DB-HTS/","title":"Bio-DB-HTS","text":"

Read files using HTSlib including BAM/CRAM, Tabix and BCF database files

https://metacpan.org/release/Bio-DB-HTS

"},{"location":"available_software/detail/Bio-DB-HTS/#available-modules","title":"Available modules","text":"

The overview below shows which Bio-DB-HTS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bio-DB-HTS, load one of these modules using a module load command like:

module load Bio-DB-HTS/3.01-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bio-DB-HTS/3.01-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bio-SearchIO-hmmer/","title":"Bio-SearchIO-hmmer","text":"

Code to parse output from hmmsearch, hmmscan, phmmer and nhmmer, compatible with both version 2 and version 3 of the HMMER package from http://hmmer.org.

https://metacpan.org/pod/Bio::SearchIO::hmmer3

"},{"location":"available_software/detail/Bio-SearchIO-hmmer/#available-modules","title":"Available modules","text":"

The overview below shows which Bio-SearchIO-hmmer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

module load Bio-SearchIO-hmmer/1.7.3-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bio-SearchIO-hmmer/1.7.3-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BioPerl/","title":"BioPerl","text":"

Bioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.

https://bioperl.org/

"},{"location":"available_software/detail/BioPerl/#available-modules","title":"Available modules","text":"

The overview below shows which BioPerl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BioPerl, load one of these modules using a module load command like:

module load BioPerl/1.7.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BioPerl/1.7.8-GCCcore-12.3.0 x x x x x x x x x BioPerl/1.7.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BioPerl/#bioperl178-gcccore-1230","title":"BioPerl/1.7.8-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Bio::Procedural-1.7.4, BioPerl-1.7.8, XML::Writer-0.900

"},{"location":"available_software/detail/BioPerl/#bioperl178-gcccore-1220","title":"BioPerl/1.7.8-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Bio::Procedural-1.7.4, BioPerl-1.7.8, XML::Writer-0.900

"},{"location":"available_software/detail/Biopython/","title":"Biopython","text":"

Biopython is a set of freely available tools for biological computation written in Python by an international team of developers. It is a distributed collaborative effort to develop Python libraries and applications which address the needs of current and future work in bioinformatics.

https://www.biopython.org

"},{"location":"available_software/detail/Biopython/#available-modules","title":"Available modules","text":"

The overview below shows which Biopython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Biopython, load one of these modules using a module load command like:

module load Biopython/1.83-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Biopython/1.83-foss-2023a x x x x x x x x x Biopython/1.81-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Bison/","title":"Bison","text":"

Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.

https://www.gnu.org/software/bison

"},{"location":"available_software/detail/Bison/#available-modules","title":"Available modules","text":"

The overview below shows which Bison installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bison, load one of these modules using a module load command like:

module load Bison/3.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bison/3.8.2-GCCcore-13.2.0 x x x x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Boost.MPI/","title":"Boost.MPI","text":"

Boost provides free peer-reviewed portable C++ source libraries.

https://www.boost.org/

"},{"location":"available_software/detail/Boost.MPI/#available-modules","title":"Available modules","text":"

The overview below shows which Boost.MPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost.MPI, load one of these modules using a module load command like:

module load Boost.MPI/1.83.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.MPI/1.83.0-gompi-2023b x x x x x x x x x Boost.MPI/1.82.0-gompi-2023a x x x x x x x x x Boost.MPI/1.81.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/Boost.Python/","title":"Boost.Python","text":"

Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.

https://boostorg.github.io/python

"},{"location":"available_software/detail/Boost.Python/#available-modules","title":"Available modules","text":"

The overview below shows which Boost.Python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost.Python, load one of these modules using a module load command like:

module load Boost.Python/1.83.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.Python/1.83.0-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/Boost/","title":"Boost","text":"

Boost provides free peer-reviewed portable C++ source libraries.

https://www.boost.org/

"},{"location":"available_software/detail/Boost/#available-modules","title":"Available modules","text":"

The overview below shows which Boost installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost, load one of these modules using a module load command like:

module load Boost/1.83.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost/1.83.0-GCC-13.2.0 x x x x x x x x x Boost/1.82.0-GCC-12.3.0 x x x x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bowtie2/","title":"Bowtie2","text":"

Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.

https://bowtie-bio.sourceforge.net/bowtie2/index.shtml

"},{"location":"available_software/detail/Bowtie2/#available-modules","title":"Available modules","text":"

The overview below shows which Bowtie2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bowtie2, load one of these modules using a module load command like:

module load Bowtie2/2.5.1-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bowtie2/2.5.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Brotli/","title":"Brotli","text":"

Brotli is a generic-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers more dense compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932.
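The description compares Brotli with deflate. Python's standard library has no Brotli bindings, so as a rough sketch the same compress/decompress round-trip can be exercised with deflate via the stdlib zlib module (a stand-in, not Brotli itself; the third-party `brotli` package offers an analogous `compress`/`decompress` API):

```python
# Deflate round-trip via zlib as a stand-in for a Brotli compress/decompress
# cycle (Brotli itself requires the third-party `brotli` package).
import zlib

data = b"EESSI software stack " * 1000   # highly repetitive, compresses well
compressed = zlib.compress(data, level=9)

assert zlib.decompress(compressed) == data
assert len(compressed) < len(data)       # dense compression on repetitive input
```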

https://github.com/google/brotli

"},{"location":"available_software/detail/Brotli/#available-modules","title":"Available modules","text":"

The overview below shows which Brotli installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Brotli, load one of these modules using a module load command like:

module load Brotli/1.1.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brotli/1.1.0-GCCcore-13.2.0 x x x x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Brunsli/","title":"Brunsli","text":"

Brunsli is a lossless JPEG repacking library.

https://github.com/google/brunsli/

"},{"location":"available_software/detail/Brunsli/#available-modules","title":"Available modules","text":"

The overview below shows which Brunsli installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Brunsli, load one of these modules using a module load command like:

module load Brunsli/0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brunsli/0.1-GCCcore-13.2.0 x x x x x x x x x Brunsli/0.1-GCCcore-12.3.0 x x x x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CD-HIT/","title":"CD-HIT","text":"

CD-HIT is a very widely used program for clustering and comparing protein or nucleotide sequences.

http://weizhongli-lab.org/cd-hit/

"},{"location":"available_software/detail/CD-HIT/#available-modules","title":"Available modules","text":"

The overview below shows which CD-HIT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CD-HIT, load one of these modules using a module load command like:

module load CD-HIT/4.8.1-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CD-HIT/4.8.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CDO/","title":"CDO","text":"

CDO is a collection of command line Operators to manipulate and analyse Climate and NWP model Data.

https://code.zmaw.de/projects/cdo

"},{"location":"available_software/detail/CDO/#available-modules","title":"Available modules","text":"

The overview below shows which CDO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CDO, load one of these modules using a module load command like:

module load CDO/2.2.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CDO/2.2.2-gompi-2023b x x x x x x x x x CDO/2.2.2-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/CFITSIO/","title":"CFITSIO","text":"

CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format.

https://heasarc.gsfc.nasa.gov/fitsio/

"},{"location":"available_software/detail/CFITSIO/#available-modules","title":"Available modules","text":"

The overview below shows which CFITSIO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CFITSIO, load one of these modules using a module load command like:

module load CFITSIO/4.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CFITSIO/4.3.1-GCCcore-13.2.0 x x x x x x x x x CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CGAL/","title":"CGAL","text":"

The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.

https://www.cgal.org/

"},{"location":"available_software/detail/CGAL/#available-modules","title":"Available modules","text":"

The overview below shows which CGAL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CGAL, load one of these modules using a module load command like:

module load CGAL/5.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CGAL/5.6-GCCcore-12.3.0 x x x x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CMake/","title":"CMake","text":"

CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.

https://www.cmake.org

"},{"location":"available_software/detail/CMake/#available-modules","title":"Available modules","text":"

The overview below shows which CMake installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CMake, load one of these modules using a module load command like:

module load CMake/3.27.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CMake/3.27.6-GCCcore-13.2.0 x x x x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CP2K/","title":"CP2K","text":"

CP2K is a freely available (GPL) program, written in Fortran 95, to perform atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods, such as density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials.

https://www.cp2k.org/

"},{"location":"available_software/detail/CP2K/#available-modules","title":"Available modules","text":"

The overview below shows which CP2K installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CP2K, load one of these modules using a module load command like:

module load CP2K/2023.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CP2K/2023.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CUDA-Samples/","title":"CUDA-Samples","text":"

Samples for CUDA developers that demonstrate features of the CUDA Toolkit

https://github.com/NVIDIA/cuda-samples

"},{"location":"available_software/detail/CUDA-Samples/#available-modules","title":"Available modules","text":"

The overview below shows which CUDA-Samples installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CUDA-Samples, load one of these modules using a module load command like:

module load CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/CUDA/","title":"CUDA","text":"

CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

https://developer.nvidia.com/cuda-toolkit

"},{"location":"available_software/detail/CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CUDA, load one of these modules using a module load command like:

module load CUDA/12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA/12.1.1 x x x x x x - x x"},{"location":"available_software/detail/CapnProto/","title":"CapnProto","text":"

Cap’n Proto is an insanely fast data interchange format and capability-based RPC system.

https://capnproto.org

"},{"location":"available_software/detail/CapnProto/#available-modules","title":"Available modules","text":"

The overview below shows which CapnProto installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CapnProto, load one of these modules using a module load command like:

module load CapnProto/1.0.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x x x x CapnProto/0.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Cartopy/","title":"Cartopy","text":"

Cartopy is a Python package designed to make drawing maps for data analysis and visualisation easy.

https://scitools.org.uk/cartopy/docs/latest/

"},{"location":"available_software/detail/Cartopy/#available-modules","title":"Available modules","text":"

The overview below shows which Cartopy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cartopy, load one of these modules using a module load command like:

module load Cartopy/0.22.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cartopy/0.22.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cartopy/#cartopy0220-foss-2023a","title":"Cartopy/0.22.0-foss-2023a","text":"

This is a list of extensions included in the module:

Cartopy-0.22.0, OWSLib-0.29.3, pyepsg-0.4.0, pykdtree-1.3.10, pyshp-2.3.1

"},{"location":"available_software/detail/Cassiopeia/","title":"Cassiopeia","text":"

A Package for Cas9-Enabled Single Cell Lineage Tracing Tree Reconstruction.

https://github.com/YosefLab/Cassiopeia

"},{"location":"available_software/detail/Cassiopeia/#available-modules","title":"Available modules","text":"

The overview below shows which Cassiopeia installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cassiopeia, load one of these modules using a module load command like:

module load Cassiopeia/2.0.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cassiopeia/2.0.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cassiopeia/#cassiopeia200-foss-2023a","title":"Cassiopeia/2.0.0-foss-2023a","text":"

This is a list of extensions included in the module:

bleach-6.1.0, Cassiopeia-2.0.0, comm-0.2.2, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.19.1, hits-0.4.0, ipywidgets-8.1.2, itolapi-4.1.4, jupyter_client-8.6.1, jupyter_core-5.7.2, jupyter_packaging-0.12.3, jupyterlab_pygments-0.3.0, jupyterlab_widgets-3.0.10, Levenshtein-0.22.0, mistune-3.0.2, nbclient-0.10.0, nbconvert-7.16.3, nbformat-5.10.3, ngs-tools-1.8.5, pandocfilters-1.5.1, python-Levenshtein-0.22.0, shortuuid-1.0.13, tinycss2-1.2.1, traitlets-5.14.2, widgetsnbextension-4.0.10

"},{"location":"available_software/detail/Catch2/","title":"Catch2","text":"

A modern, C++-native, header-only test framework for unit tests, TDD and BDD, using C++11, C++14, C++17 and later

https://github.com/catchorg/Catch2

"},{"location":"available_software/detail/Catch2/#available-modules","title":"Available modules","text":"

The overview below shows which Catch2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Catch2, load one of these modules using a module load command like:

module load Catch2/2.13.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Catch2/2.13.9-GCCcore-13.2.0 x x x x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Cbc/","title":"Cbc","text":"

Cbc (Coin-or branch and cut) is an open-source mixed integer linear programming solver written in C++. It can be used as a callable library or as a stand-alone executable.

https://github.com/coin-or/Cbc

"},{"location":"available_software/detail/Cbc/#available-modules","title":"Available modules","text":"

The overview below shows which Cbc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cbc, load one of these modules using a module load command like:

module load Cbc/2.10.11-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cbc/2.10.11-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cgl/","title":"Cgl","text":"

The COIN-OR Cut Generation Library (Cgl) is a collection of cut generators that can be used with other COIN-OR packages that make use of cuts, such as, among others, the linear solver Clp or the mixed integer linear programming solvers Cbc or BCP. Cgl uses the abstract class OsiSolverInterface (see Osi) to use or communicate with a solver. It does not directly call a solver.

https://github.com/coin-or/Cgl

"},{"location":"available_software/detail/Cgl/#available-modules","title":"Available modules","text":"

The overview below shows which Cgl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cgl, load one of these modules using a module load command like:

module load Cgl/0.60.8-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cgl/0.60.8-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Clp/","title":"Clp","text":"

Clp (Coin-or linear programming) is an open-source linear programming solver. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available.

https://github.com/coin-or/Clp

"},{"location":"available_software/detail/Clp/#available-modules","title":"Available modules","text":"

The overview below shows which Clp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Clp, load one of these modules using a module load command like:

module load Clp/1.17.9-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Clp/1.17.9-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CoinUtils/","title":"CoinUtils","text":"

CoinUtils (Coin-OR Utilities) is an open-source collection of classes and functions that are generally useful to more than one COIN-OR project.

https://github.com/coin-or/CoinUtils

"},{"location":"available_software/detail/CoinUtils/#available-modules","title":"Available modules","text":"

The overview below shows which CoinUtils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CoinUtils, load one of these modules using a module load command like:

module load CoinUtils/2.11.10-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CoinUtils/2.11.10-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Critic2/","title":"Critic2","text":"

Critic2 is a program for the analysis of quantum mechanical calculation results in molecules and periodic solids.

https://aoterodelaroza.github.io/critic2/

"},{"location":"available_software/detail/Critic2/#available-modules","title":"Available modules","text":"

The overview below shows which Critic2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Critic2, load one of these modules using a module load command like:

module load Critic2/1.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Critic2/1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CubeLib/","title":"CubeLib","text":"

Cube, which is used as a performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general-purpose C++ library component and command-line tools.

https://www.scalasca.org/software/cube-4.x/download.html

"},{"location":"available_software/detail/CubeLib/#available-modules","title":"Available modules","text":"

The overview below shows which CubeLib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CubeLib, load one of these modules using a module load command like:

module load CubeLib/4.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CubeLib/4.8.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/CubeWriter/","title":"CubeWriter","text":"

Cube, which is used as a performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component.

https://www.scalasca.org/software/cube-4.x/download.html

"},{"location":"available_software/detail/CubeWriter/#available-modules","title":"Available modules","text":"

The overview below shows which CubeWriter installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CubeWriter, load one of these modules using a module load command like:

module load CubeWriter/4.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CubeWriter/4.8.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/Cython/","title":"Cython","text":"

Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex).

https://cython.org/

"},{"location":"available_software/detail/Cython/#available-modules","title":"Available modules","text":"

The overview below shows which Cython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cython, load one of these modules using a module load command like:

module load Cython/3.0.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cython/3.0.10-GCCcore-13.2.0 x x x x x x x x x Cython/3.0.8-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DB/","title":"DB","text":"

Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.
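Berkeley DB exposes key-value storage through a simple dictionary-like interface. As a rough sketch of that style of access, Python's stdlib `dbm.dumb` backend can stand in (it is a portable pure-Python store, not Berkeley DB itself):

```python
# Key-value storage in the Berkeley DB style, sketched with Python's
# stdlib `dbm.dumb` backend (a portable stand-in, not Berkeley DB itself).
import dbm.dumb
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example")

with dbm.dumb.open(path, "c") as db:  # "c": read/write, create if missing
    db[b"alpha"] = b"1"
    db[b"beta"] = b"2"

with dbm.dumb.open(path, "r") as db:  # reopen read-only
    assert db[b"alpha"] == b"1"
    assert len(db) == 2
```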

https://www.oracle.com/technetwork/products/berkeleydb

"},{"location":"available_software/detail/DB/#available-modules","title":"Available modules","text":"

The overview below shows which DB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DB, load one of these modules using a module load command like:

module load DB/18.1.40-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB/18.1.40-GCCcore-12.3.0 x x x x x x x x x DB/18.1.40-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/DB_File/","title":"DB_File","text":"

Perl5 access to Berkeley DB version 1.x.

https://perldoc.perl.org/DB_File.html

"},{"location":"available_software/detail/DB_File/#available-modules","title":"Available modules","text":"

The overview below shows which DB_File installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DB_File, load one of these modules using a module load command like:

module load DB_File/1.859-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB_File/1.859-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DIAMOND/","title":"DIAMOND","text":"

Accelerated BLAST-compatible local sequence aligner

https://github.com/bbuchfink/diamond

"},{"location":"available_software/detail/DIAMOND/#available-modules","title":"Available modules","text":"

The overview below shows which DIAMOND installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DIAMOND, load one of these modules using a module load command like:

module load DIAMOND/2.1.8-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DIAMOND/2.1.8-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DP3/","title":"DP3","text":"

DP3: streaming processing pipeline for radio interferometric data.

https://dp3.readthedocs.io/

"},{"location":"available_software/detail/DP3/#available-modules","title":"Available modules","text":"

The overview below shows which DP3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DP3, load one of these modules using a module load command like:

module load DP3/6.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DP3/6.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/DendroPy/","title":"DendroPy","text":"

A Python library for phylogenetics and phylogenetic computing: reading, writing, simulation, processing and manipulation of phylogenetic trees (phylogenies) and characters.

https://dendropy.org/

"},{"location":"available_software/detail/DendroPy/#available-modules","title":"Available modules","text":"

The overview below shows which DendroPy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DendroPy, load one of these modules using a module load command like:

module load DendroPy/4.6.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x x x x DendroPy/4.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Doxygen/","title":"Doxygen","text":"

Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.

https://www.doxygen.org

"},{"location":"available_software/detail/Doxygen/#available-modules","title":"Available modules","text":"

The overview below shows which Doxygen installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Doxygen, load one of these modules using a module load command like:

module load Doxygen/1.9.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Doxygen/1.9.8-GCCcore-13.2.0 x x x x x x x x x Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/EESSI-extend/","title":"EESSI-extend","text":"

The goal of the European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure. This module allows you to extend EESSI using the same configuration for EasyBuild as EESSI itself uses. A number of environment variables control the behaviour of the module:
- EESSI_USER_INSTALL can be set to a location to install modules for use by the user only. The location must already exist on the filesystem.
- EESSI_PROJECT_INSTALL can be set to a location to install modules for use by a project. The location must already exist on the filesystem, and you should ensure that it has the correct Linux group and the SGID permission set on that directory (chmod g+s $EESSI_PROJECT_INSTALL) so that all members of the group have permission to read and write installations.
- EESSI_SITE_INSTALL is either defined or not, and cannot be used together with another of these variables. A site installation is done in a defined location, and any installations there are (by default) world readable.
- EESSI_CVMFS_INSTALL is either defined or not, and cannot be used together with another of these variables. A CVMFS installation targets a defined location which will be ingested into CVMFS, and is only useful for CVMFS administrators.
- If none of the environment variables above are defined, an EESSI_USER_INSTALL is assumed, with a value of $HOME/EESSI. If both EESSI_USER_INSTALL and EESSI_PROJECT_INSTALL are defined, both sets of installations are exposed, but new installations are created as user installations.
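A minimal sketch of a project-wide setup using these variables; the path and group name (/project/myproject/eessi, myproject) are hypothetical placeholders, and the directory must already exist:

```shell
# Hypothetical project-wide installation prefix; must already exist.
export EESSI_PROJECT_INSTALL=/project/myproject/eessi

# Give the directory the project's group and set the SGID bit so that
# new installations inherit the group and remain group-writable.
chgrp myproject "$EESSI_PROJECT_INSTALL"
chmod g+s "$EESSI_PROJECT_INSTALL"

# The module picks up EESSI_PROJECT_INSTALL from the environment.
module load EESSI-extend/2023.06-easybuild
```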

https://eessi.io/docs/

"},{"location":"available_software/detail/EESSI-extend/#available-modules","title":"Available modules","text":"

The overview below shows which EESSI-extend installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EESSI-extend, load one of these modules using a module load command like:

module load EESSI-extend/2023.06-easybuild\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EESSI-extend/2023.06-easybuild x x x x x x x x x"},{"location":"available_software/detail/ELPA/","title":"ELPA","text":"

Eigenvalue SoLvers for Petaflop-Applications.

https://elpa.mpcdf.mpg.de/

"},{"location":"available_software/detail/ELPA/#available-modules","title":"Available modules","text":"

The overview below shows which ELPA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ELPA, load one of these modules using a module load command like:

module load ELPA/2023.05.001-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ELPA/2023.05.001-foss-2023a x x x x x x x x x ELPA/2022.05.001-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/ESPResSo/","title":"ESPResSo","text":"

A software package for performing and analyzing scientific Molecular Dynamics simulations.

https://espressomd.org/wordpress

"},{"location":"available_software/detail/ESPResSo/#available-modules","title":"Available modules","text":"

The overview below shows which ESPResSo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ESPResSo, load one of these modules using a module load command like:

module load ESPResSo/4.2.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ESPResSo/4.2.2-foss-2023b x x x x x x x x x ESPResSo/4.2.2-foss-2023a x x x x x x x x x ESPResSo/4.2.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/ETE/","title":"ETE","text":"

A Python framework for the analysis and visualization of trees

http://etetoolkit.org

"},{"location":"available_software/detail/ETE/#available-modules","title":"Available modules","text":"

The overview below shows which ETE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ETE, load one of these modules using a module load command like:

module load ETE/3.1.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ETE/3.1.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/EasyBuild/","title":"EasyBuild","text":"

EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.

https://easybuilders.github.io/easybuild

"},{"location":"available_software/detail/EasyBuild/#available-modules","title":"Available modules","text":"

The overview below shows which EasyBuild installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EasyBuild, load one of these modules using a module load command like:

module load EasyBuild/4.9.4\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EasyBuild/4.9.4 x x x x x x x x x EasyBuild/4.9.3 x x x x x x x x x EasyBuild/4.9.2 x x x x x x x x x EasyBuild/4.9.1 x x x x x x x x x EasyBuild/4.9.0 x x x x x x x x x EasyBuild/4.8.2 x x x x x x x x x"},{"location":"available_software/detail/Eigen/","title":"Eigen","text":"

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

https://eigen.tuxfamily.org

"},{"location":"available_software/detail/Eigen/#available-modules","title":"Available modules","text":"

The overview below shows which Eigen installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Eigen, load one of these modules using a module load command like:

module load Eigen/3.4.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Eigen/3.4.0-GCCcore-13.2.0 x x x x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/EveryBeam/","title":"EveryBeam","text":"

Library that provides the antenna response pattern for several instruments, such as LOFAR (and LOBES), SKA (OSKAR), MWA, JVLA, etc.

https://everybeam.readthedocs.io/

"},{"location":"available_software/detail/EveryBeam/#available-modules","title":"Available modules","text":"

The overview below shows which EveryBeam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EveryBeam, load one of these modules using a module load command like:

module load EveryBeam/0.5.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EveryBeam/0.5.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/Extrae/","title":"Extrae","text":"

Extrae is the package devoted to generating Paraver trace-files for post-mortem analysis. Extrae is a tool that uses different interposition mechanisms to inject probes into the target application so as to gather information regarding the application performance.

https://tools.bsc.es/extrae

"},{"location":"available_software/detail/Extrae/#available-modules","title":"Available modules","text":"

The overview below shows which Extrae installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Extrae, load one of these modules using a module load command like:

module load Extrae/4.2.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Extrae/4.2.0-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/FFTW.MPI/","title":"FFTW.MPI","text":"

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.

https://www.fftw.org

"},{"location":"available_software/detail/FFTW.MPI/#available-modules","title":"Available modules","text":"

The overview below shows which FFTW.MPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFTW.MPI, load one of these modules using a module load command like:

module load FFTW.MPI/3.3.10-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW.MPI/3.3.10-gompi-2023b x x x x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/FFTW/","title":"FFTW","text":"

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.

https://www.fftw.org

"},{"location":"available_software/detail/FFTW/#available-modules","title":"Available modules","text":"

The overview below shows which FFTW installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFTW, load one of these modules using a module load command like:

module load FFTW/3.3.10-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW/3.3.10-GCC-13.2.0 x x x x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FFmpeg/","title":"FFmpeg","text":"

A complete, cross-platform solution to record, convert and stream audio and video.

https://www.ffmpeg.org/

"},{"location":"available_software/detail/FFmpeg/#available-modules","title":"Available modules","text":"

The overview below shows which FFmpeg installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFmpeg, load one of these modules using a module load command like:

module load FFmpeg/6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFmpeg/6.0-GCCcore-13.2.0 x x x x x x x x x FFmpeg/6.0-GCCcore-12.3.0 x x x x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FLAC/","title":"FLAC","text":"

FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.

https://xiph.org/flac/

"},{"location":"available_software/detail/FLAC/#available-modules","title":"Available modules","text":"

The overview below shows which FLAC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FLAC, load one of these modules using a module load command like:

module load FLAC/1.4.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FLAC/1.4.3-GCCcore-13.2.0 x x x x x x x x x FLAC/1.4.2-GCCcore-12.3.0 x x x x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FLTK/","title":"FLTK","text":"

FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.

https://www.fltk.org

"},{"location":"available_software/detail/FLTK/#available-modules","title":"Available modules","text":"

The overview below shows which FLTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FLTK, load one of these modules using a module load command like:

module load FLTK/1.3.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FLTK/1.3.8-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FastME/","title":"FastME","text":"

FastME: a comprehensive, accurate and fast distance-based phylogeny inference program.

http://www.atgc-montpellier.fr/fastme/

"},{"location":"available_software/detail/FastME/#available-modules","title":"Available modules","text":"

The overview below shows which FastME installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FastME, load one of these modules using a module load command like:

module load FastME/2.1.6.3-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FastME/2.1.6.3-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Fiona/","title":"Fiona","text":"

Fiona is designed to be simple and dependable. It focuses on reading and writing data in standard Python IO style and relies upon familiar Python types and protocols such as files, dictionaries, mappings, and iterators instead of classes specific to OGR. Fiona can read and write real-world data using multi-layered GIS formats and zipped virtual file systems and integrates readily with other Python GIS packages such as pyproj, Rtree, and Shapely.

https://github.com/Toblerity/Fiona

"},{"location":"available_software/detail/Fiona/#available-modules","title":"Available modules","text":"

The overview below shows which Fiona installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Fiona, load one of these modules using a module load command like:

module load Fiona/1.9.5-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Fiona/1.9.5-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Fiona/#fiona195-foss-2023a","title":"Fiona/1.9.5-foss-2023a","text":"

This is a list of extensions included in the module:

click-plugins-1.1.1, cligj-0.7.2, fiona-1.9.5, munch-4.0.0

"},{"location":"available_software/detail/Flask/","title":"Flask","text":"

Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors

https://www.palletsprojects.com/p/flask/

"},{"location":"available_software/detail/Flask/#available-modules","title":"Available modules","text":"

The overview below shows which Flask installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Flask, load one of these modules using a module load command like:

module load Flask/2.2.3-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Flask/2.2.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Flask/#flask223-gcccore-1220","title":"Flask/2.2.3-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

asgiref-3.6.0, cachelib-0.10.2, Flask-2.2.3, Flask-Cors-3.0.10, Flask-Session-0.4.0, itsdangerous-2.1.2, Werkzeug-2.2.3

"},{"location":"available_software/detail/FlexiBLAS/","title":"FlexiBLAS","text":"

FlexiBLAS is a wrapper library that enables the exchange of the BLAS and LAPACK implementation used by a program without recompiling or relinking it.

https://gitlab.mpi-magdeburg.mpg.de/software/flexiblas-release

"},{"location":"available_software/detail/FlexiBLAS/#available-modules","title":"Available modules","text":"

The overview below shows which FlexiBLAS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FlexiBLAS, load one of these modules using a module load command like:

module load FlexiBLAS/3.3.1-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FragGeneScan/","title":"FragGeneScan","text":"

FragGeneScan is an application for finding (fragmented) genes in short reads.

https://omics.informatics.indiana.edu/FragGeneScan/

"},{"location":"available_software/detail/FragGeneScan/#available-modules","title":"Available modules","text":"

The overview below shows which FragGeneScan installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FragGeneScan, load one of these modules using a module load command like:

module load FragGeneScan/1.31-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FragGeneScan/1.31-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FreeImage/","title":"FreeImage","text":"

FreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, and multithreading safe.

http://freeimage.sourceforge.net

"},{"location":"available_software/detail/FreeImage/#available-modules","title":"Available modules","text":"

The overview below shows which FreeImage installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FreeImage, load one of these modules using a module load command like:

module load FreeImage/3.18.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FreeImage/3.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FriBidi/","title":"FriBidi","text":"

The Free Implementation of the Unicode Bidirectional Algorithm.

https://github.com/fribidi/fribidi

"},{"location":"available_software/detail/FriBidi/#available-modules","title":"Available modules","text":"

The overview below shows which FriBidi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FriBidi, load one of these modules using a module load command like:

module load FriBidi/1.0.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FriBidi/1.0.13-GCCcore-13.2.0 x x x x x x x x x FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GATK/","title":"GATK","text":"

The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyse next-generation resequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping, as well as a strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.

https://www.broadinstitute.org/gatk/

"},{"location":"available_software/detail/GATK/#available-modules","title":"Available modules","text":"

The overview below shows which GATK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GATK, load one of these modules using a module load command like:

module load GATK/4.5.0.0-GCCcore-12.3.0-Java-17\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GATK/4.5.0.0-GCCcore-12.3.0-Java-17 x x x x x x x x x"},{"location":"available_software/detail/GCC/","title":"GCC","text":"

The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).

https://gcc.gnu.org/

"},{"location":"available_software/detail/GCC/#available-modules","title":"Available modules","text":"

The overview below shows which GCC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GCC, load one of these modules using a module load command like:

module load GCC/13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCC/13.2.0 x x x x x x x x x GCC/12.3.0 x x x x x x x x x GCC/12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GCCcore/","title":"GCCcore","text":"

The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).

https://gcc.gnu.org/

"},{"location":"available_software/detail/GCCcore/#available-modules","title":"Available modules","text":"

The overview below shows which GCCcore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GCCcore, load one of these modules using a module load command like:

module load GCCcore/13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCCcore/13.2.0 x x x x x x x x x GCCcore/12.3.0 x x x x x x x x x GCCcore/12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GDAL/","title":"GDAL","text":"

GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.

https://www.gdal.org

"},{"location":"available_software/detail/GDAL/#available-modules","title":"Available modules","text":"

The overview below shows which GDAL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDAL, load one of these modules using a module load command like:

module load GDAL/3.9.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDAL/3.9.0-foss-2023b x x x x x x x x x GDAL/3.7.1-foss-2023a x x x x x x x x x GDAL/3.6.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/GDB/","title":"GDB","text":"

The GNU Project Debugger

https://www.gnu.org/software/gdb/gdb.html

"},{"location":"available_software/detail/GDB/#available-modules","title":"Available modules","text":"

The overview below shows which GDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDB, load one of these modules using a module load command like:

module load GDB/13.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDB/13.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/GDRCopy/","title":"GDRCopy","text":"

A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.

https://github.com/NVIDIA/gdrcopy

"},{"location":"available_software/detail/GDRCopy/#available-modules","title":"Available modules","text":"

The overview below shows which GDRCopy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDRCopy, load one of these modules using a module load command like:

module load GDRCopy/2.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDRCopy/2.4-GCCcore-13.2.0 x x x x x x x x x GDRCopy/2.3.1-GCCcore-12.3.0 x x x x x x - x x"},{"location":"available_software/detail/GEOS/","title":"GEOS","text":"

GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)

https://trac.osgeo.org/geos

"},{"location":"available_software/detail/GEOS/#available-modules","title":"Available modules","text":"

The overview below shows which GEOS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GEOS, load one of these modules using a module load command like:

module load GEOS/3.12.1-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GEOS/3.12.1-GCC-13.2.0 x x x x x x x x x GEOS/3.12.0-GCC-12.3.0 x x x x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GL2PS/","title":"GL2PS","text":"

GL2PS: an OpenGL to PostScript printing library

https://www.geuz.org/gl2ps/

"},{"location":"available_software/detail/GL2PS/#available-modules","title":"Available modules","text":"

The overview below shows which GL2PS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GL2PS, load one of these modules using a module load command like:

module load GL2PS/1.4.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GL2PS/1.4.2-GCCcore-12.3.0 x x x x x x x x x GL2PS/1.4.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GLPK/","title":"GLPK","text":"

The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.

https://www.gnu.org/software/glpk/

"},{"location":"available_software/detail/GLPK/#available-modules","title":"Available modules","text":"

The overview below shows which GLPK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GLPK, load one of these modules using a module load command like:

module load GLPK/5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLPK/5.0-GCCcore-13.2.0 x x x x x x x x x GLPK/5.0-GCCcore-12.3.0 x x x x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GLib/","title":"GLib","text":"

GLib is one of the base libraries of the GTK+ project

https://www.gtk.org/

"},{"location":"available_software/detail/GLib/#available-modules","title":"Available modules","text":"

The overview below shows which GLib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GLib, load one of these modules using a module load command like:

module load GLib/2.78.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLib/2.78.1-GCCcore-13.2.0 x x x x x x x x x GLib/2.77.1-GCCcore-12.3.0 x x x x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GMP/","title":"GMP","text":"

GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.

https://gmplib.org/

"},{"location":"available_software/detail/GMP/#available-modules","title":"Available modules","text":"

The overview below shows which GMP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GMP, load one of these modules using a module load command like:

module load GMP/6.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GMP/6.3.0-GCCcore-13.2.0 x x x x x x x x x GMP/6.2.1-GCCcore-12.3.0 x x x x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GObject-Introspection/","title":"GObject-Introspection","text":"

GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.

https://gi.readthedocs.io/en/latest/

"},{"location":"available_software/detail/GObject-Introspection/#available-modules","title":"Available modules","text":"

The overview below shows which GObject-Introspection installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GObject-Introspection, load one of these modules using a module load command like:

module load GObject-Introspection/1.78.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GObject-Introspection/1.78.1-GCCcore-13.2.0 x x x x x x x x x GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GROMACS/","title":"GROMACS","text":"

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU-only build, containing both MPI and threadMPI binaries for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.

https://www.gromacs.org

"},{"location":"available_software/detail/GROMACS/#available-modules","title":"Available modules","text":"

The overview below shows which GROMACS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GROMACS, load one of these modules using a module load command like:

module load GROMACS/2024.4-foss-2023b\n
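The load step above can be sketched as a short shell session. This is a minimal sketch that assumes an initialized EESSI environment providing Lmod's `module` command; the availability guard and the `status` variable are illustrative additions, not part of EESSI itself:

```shell
#!/bin/sh
# Hedged sketch: load the GROMACS module and confirm the gmx binary works.
# Assumes EESSI has been initialized so that `module` is defined.
if command -v module >/dev/null 2>&1; then
    module load GROMACS/2024.4-foss-2023b
    gmx --version          # print the GROMACS version banner
    status="loaded"
else
    echo "Lmod 'module' command not found; initialize EESSI first"
    status="no-module"
fi
echo "status=$status"
```

On a system where EESSI is not yet initialized, the sketch only prints the notice; after sourcing the EESSI initialization script, it loads the module into the current shell.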

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GROMACS/2024.4-foss-2023b x x x x x x x x x GROMACS/2024.3-foss-2023b x x x x x x x x x GROMACS/2024.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/GROMACS/#gromacs20244-foss-2023b","title":"GROMACS/2024.4-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.4.2

"},{"location":"available_software/detail/GROMACS/#gromacs20243-foss-2023b","title":"GROMACS/2024.3-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.4.2

"},{"location":"available_software/detail/GROMACS/#gromacs20241-foss-2023b","title":"GROMACS/2024.1-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.5.0

"},{"location":"available_software/detail/GSL/","title":"GSL","text":"

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.

https://www.gnu.org/software/gsl/

"},{"location":"available_software/detail/GSL/#available-modules","title":"Available modules","text":"

The overview below shows which GSL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GSL, load one of these modules using a module load command like:

module load GSL/2.7-GCC-13.2.0\n
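Since GSL is a library to compile against rather than a tool to run, a sketch of building a small program with it may help. This assumes an initialized EESSI environment; the guard, file names, and `status` variable are illustrative (EasyBuild-built modules also export an `EBROOTGSL` variable pointing at the installation prefix):

```shell
#!/bin/sh
# Hedged sketch: load GSL and compile a one-line program against it.
if command -v module >/dev/null 2>&1; then
    module load GSL/2.7-GCC-13.2.0
    cat > bessel.c <<'EOF'
#include <stdio.h>
#include <gsl/gsl_sf_bessel.h>
int main(void) {
    /* evaluate J0(5.0), one of GSL's special functions */
    printf("J0(5) = %g\n", gsl_sf_bessel_J0(5.0));
    return 0;
}
EOF
    gcc bessel.c -o bessel -lgsl -lgslcblas -lm && ./bessel
    status="built"
else
    echo "EESSI not initialized; skipping GSL build demo"
    status="skipped"
fi
echo "status=$status"
```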

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GSL/2.7-GCC-13.2.0 x x x x x x x x x GSL/2.7-GCC-12.3.0 x x x x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GST-plugins-base/","title":"GST-plugins-base","text":"

GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.

https://gstreamer.freedesktop.org/

"},{"location":"available_software/detail/GST-plugins-base/#available-modules","title":"Available modules","text":"

The overview below shows which GST-plugins-base installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GST-plugins-base, load one of these modules using a module load command like:

module load GST-plugins-base/1.24.8-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GST-plugins-base/1.24.8-GCC-13.2.0 x x x x x x x x x GST-plugins-base/1.22.5-GCC-12.3.0 x x x x x x x x x GST-plugins-base/1.22.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GStreamer/","title":"GStreamer","text":"

GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.

https://gstreamer.freedesktop.org/

"},{"location":"available_software/detail/GStreamer/#available-modules","title":"Available modules","text":"

The overview below shows which GStreamer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GStreamer, load one of these modules using a module load command like:

module load GStreamer/1.24.8-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GStreamer/1.24.8-GCC-13.2.0 x x x x x x x x x GStreamer/1.22.5-GCC-12.3.0 x x x x x x x x x GStreamer/1.22.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GTK3/","title":"GTK3","text":"

GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.

https://developer.gnome.org/gtk3/stable/

"},{"location":"available_software/detail/GTK3/#available-modules","title":"Available modules","text":"

The overview below shows which GTK3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GTK3, load one of these modules using a module load command like:

module load GTK3/3.24.39-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GTK3/3.24.39-GCCcore-13.2.0 x x x x x x x x x GTK3/3.24.37-GCCcore-12.3.0 x x x x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Gdk-Pixbuf/","title":"Gdk-Pixbuf","text":"

The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.

https://docs.gtk.org/gdk-pixbuf/

"},{"location":"available_software/detail/Gdk-Pixbuf/#available-modules","title":"Available modules","text":"

The overview below shows which Gdk-Pixbuf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Gdk-Pixbuf, load one of these modules using a module load command like:

module load Gdk-Pixbuf/2.42.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Gdk-Pixbuf/2.42.10-GCCcore-13.2.0 x x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GenomeTools/","title":"GenomeTools","text":"

A comprehensive software library for efficient processing of structured genome annotations.

http://genometools.org

"},{"location":"available_software/detail/GenomeTools/#available-modules","title":"Available modules","text":"

The overview below shows which GenomeTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GenomeTools, load one of these modules using a module load command like:

module load GenomeTools/1.6.2-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GenomeTools/1.6.2-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Ghostscript/","title":"Ghostscript","text":"

Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.

https://ghostscript.com

"},{"location":"available_software/detail/Ghostscript/#available-modules","title":"Available modules","text":"

The overview below shows which Ghostscript installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ghostscript, load one of these modules using a module load command like:

module load Ghostscript/10.02.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ghostscript/10.02.1-GCCcore-13.2.0 x x x x x x x x x Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GitPython/","title":"GitPython","text":"

GitPython is a Python library used to interact with Git repositories.

https://gitpython.readthedocs.org

"},{"location":"available_software/detail/GitPython/#available-modules","title":"Available modules","text":"

The overview below shows which GitPython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GitPython, load one of these modules using a module load command like:

module load GitPython/3.1.40-GCCcore-12.3.0\n
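After loading the module, GitPython is importable from the matching Python. A minimal sketch, assuming an initialized EESSI environment (the guard and `status` variable are illustrative):

```shell
#!/bin/sh
# Hedged sketch: load GitPython and import it from Python.
if command -v module >/dev/null 2>&1; then
    module load GitPython/3.1.40-GCCcore-12.3.0
    python -c 'import git; print("GitPython", git.__version__)'
    status="ok"
else
    echo "EESSI not initialized; skipping GitPython demo"
    status="skipped"
fi
echo "status=$status"
```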

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GitPython/3.1.40-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/GitPython/#gitpython3140-gcccore-1230","title":"GitPython/3.1.40-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

gitdb-4.0.11, GitPython-3.1.40, smmap-5.0.1

"},{"location":"available_software/detail/Graphene/","title":"Graphene","text":"

Graphene is a thin layer of types for graphic libraries

https://ebassi.github.io/graphene/

"},{"location":"available_software/detail/Graphene/#available-modules","title":"Available modules","text":"

The overview below shows which Graphene installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Graphene, load one of these modules using a module load command like:

module load Graphene/1.10.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Graphene/1.10.8-GCCcore-13.2.0 x x x x x x x x x Graphene/1.10.8-GCCcore-12.3.0 x x x x x x x x x Graphene/1.10.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HDBSCAN/","title":"HDBSCAN","text":"

The hdbscan library is a suite of tools to use unsupervised learning to find clusters, or dense regions, of a dataset. The primary algorithm is HDBSCAN* as proposed by Campello, Moulavi, and Sander. The library provides a high performance implementation of this algorithm, along with tools for analysing the resulting clustering.

http://hdbscan.readthedocs.io/en/latest/

"},{"location":"available_software/detail/HDBSCAN/#available-modules","title":"Available modules","text":"

The overview below shows which HDBSCAN installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDBSCAN, load one of these modules using a module load command like:

module load HDBSCAN/0.8.38.post1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDBSCAN/0.8.38.post1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/HDF/","title":"HDF","text":"

HDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.

https://www.hdfgroup.org/products/hdf4/

"},{"location":"available_software/detail/HDF/#available-modules","title":"Available modules","text":"

The overview below shows which HDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDF, load one of these modules using a module load command like:

module load HDF/4.2.16-2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF/4.2.16-2-GCCcore-13.2.0 x x x x x x x x x HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HDF5/","title":"HDF5","text":"

HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.

https://portal.hdfgroup.org/display/support

"},{"location":"available_software/detail/HDF5/#available-modules","title":"Available modules","text":"

The overview below shows which HDF5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDF5, load one of these modules using a module load command like:

module load HDF5/1.14.3-gompi-2023b\n
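A sketch of verifying the HDF5 tools after loading the module. This assumes an initialized EESSI environment; since the `gompi` toolchain implies an MPI-enabled build, the parallel compiler wrapper `h5pcc` is assumed to be the one installed (the guard and `status` variable are illustrative):

```shell
#!/bin/sh
# Hedged sketch: load HDF5 and inspect the bundled tools.
if command -v module >/dev/null 2>&1; then
    module load HDF5/1.14.3-gompi-2023b
    h5dump --version                # version of the h5dump inspection tool
    h5pcc -showconfig | head -n 5   # summary of how this (MPI) HDF5 was built
    status="ok"
else
    echo "EESSI not initialized; skipping HDF5 demo"
    status="skipped"
fi
echo "status=$status"
```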

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF5/1.14.3-gompi-2023b x x x x x x x x x HDF5/1.14.0-gompi-2023a x x x x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/HMMER/","title":"HMMER","text":"

HMMER is used for searching sequence databases for homologs of protein sequences, and for making protein sequence alignments. It implements methods using probabilistic models called profile hidden Markov models (profile HMMs). Compared to BLAST, FASTA, and other sequence alignment and database search tools based on older scoring methodology, HMMER aims to be significantly more accurate and more able to detect remote homologs because of the strength of its underlying mathematical models. In the past, this strength came at significant computational expense, but in the new HMMER3 project, HMMER is now essentially as fast as BLAST.

http://hmmer.org/

"},{"location":"available_software/detail/HMMER/#available-modules","title":"Available modules","text":"

The overview below shows which HMMER installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HMMER, load one of these modules using a module load command like:

module load HMMER/3.4-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HMMER/3.4-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/HPL/","title":"HPL","text":"

HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.

https://www.netlib.org/benchmark/hpl/

"},{"location":"available_software/detail/HPL/#available-modules","title":"Available modules","text":"

The overview below shows which HPL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HPL, load one of these modules using a module load command like:

module load HPL/2.3-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HPL/2.3-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/HTSlib/","title":"HTSlib","text":"

A C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix.

https://www.htslib.org/

"},{"location":"available_software/detail/HTSlib/#available-modules","title":"Available modules","text":"

The overview below shows which HTSlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HTSlib, load one of these modules using a module load command like:

module load HTSlib/1.19.1-GCC-13.2.0\n
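The bundled bgzip and tabix utilities can be exercised on a tiny file. A minimal sketch, assuming an initialized EESSI environment (the guard, the demo file name, and the `status` variable are illustrative):

```shell
#!/bin/sh
# Hedged sketch: load HTSlib, then compress and index a small BED file.
if command -v module >/dev/null 2>&1; then
    module load HTSlib/1.19.1-GCC-13.2.0
    printf 'chr1\t100\t200\tfeatureA\n' > demo.bed
    bgzip -f demo.bed              # block-gzip compress -> demo.bed.gz
    tabix -p bed demo.bed.gz       # build a .tbi index for region queries
    tabix demo.bed.gz chr1:50-150  # print records overlapping the region
    status="ok"
else
    echo "EESSI not initialized; skipping HTSlib demo"
    status="skipped"
fi
echo "status=$status"
```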

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HTSlib/1.19.1-GCC-13.2.0 x x x x x x x x x HTSlib/1.18-GCC-12.3.0 x x x x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HarfBuzz/","title":"HarfBuzz","text":"

HarfBuzz is an OpenType text shaping engine.

https://www.freedesktop.org/wiki/Software/HarfBuzz

"},{"location":"available_software/detail/HarfBuzz/#available-modules","title":"Available modules","text":"

The overview below shows which HarfBuzz installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HarfBuzz, load one of these modules using a module load command like:

module load HarfBuzz/8.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HarfBuzz/8.2.2-GCCcore-13.2.0 x x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HepMC3/","title":"HepMC3","text":"

HepMC is a standard for storing Monte Carlo event data.

http://hepmc.web.cern.ch/hepmc/

"},{"location":"available_software/detail/HepMC3/#available-modules","title":"Available modules","text":"

The overview below shows which HepMC3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HepMC3, load one of these modules using a module load command like:

module load HepMC3/3.2.6-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HepMC3/3.2.6-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Highway/","title":"Highway","text":"

Highway is a C++ library for SIMD (Single Instruction, Multiple Data), i.e. applying the same operation to 'lanes'.

https://github.com/google/highway

"},{"location":"available_software/detail/Highway/#available-modules","title":"Available modules","text":"

The overview below shows which Highway installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Highway, load one of these modules using a module load command like:

module load Highway/1.0.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Highway/1.0.4-GCCcore-12.3.0 x x x x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Hypre/","title":"Hypre","text":"

Hypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.

https://computation.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods

"},{"location":"available_software/detail/Hypre/#available-modules","title":"Available modules","text":"

The overview below shows which Hypre installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Hypre, load one of these modules using a module load command like:

module load Hypre/2.29.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Hypre/2.29.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/ICU/","title":"ICU","text":"

ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.

https://icu.unicode.org

"},{"location":"available_software/detail/ICU/#available-modules","title":"Available modules","text":"

The overview below shows which ICU installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ICU, load one of these modules using a module load command like:

module load ICU/74.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ICU/74.1-GCCcore-13.2.0 x x x x x x x x x ICU/73.2-GCCcore-12.3.0 x x x x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/IDG/","title":"IDG","text":"

Image Domain Gridding (IDG) is a fast method for convolutional resampling (gridding/degridding) of radio astronomical data (visibilities). Direction dependent effects (DDEs) or A-terms can be applied in the gridding process. The algorithm is described in "Image Domain Gridding: a fast method for convolutional resampling of visibilities", Van der Tol (2018). The implementation is described in "Radio-astronomical imaging on graphics processors", Veenboer (2020). Please cite these papers in publications using IDG.

https://idg.readthedocs.io/

"},{"location":"available_software/detail/IDG/#available-modules","title":"Available modules","text":"

The overview below shows which IDG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IDG, load one of these modules using a module load command like:

module load IDG/1.2.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IDG/1.2.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/IPython/","title":"IPython","text":"

IPython provides a rich architecture for interactive computing with: Powerful interactive shells (terminal and Qt-based). A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into your own projects. Easy to use, high performance tools for parallel computing.

https://ipython.org/index.html

"},{"location":"available_software/detail/IPython/#available-modules","title":"Available modules","text":"

The overview below shows which IPython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IPython, load one of these modules using a module load command like:

module load IPython/8.17.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IPython/8.17.2-GCCcore-13.2.0 x x x x x x x x x IPython/8.14.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/IPython/#ipython8172-gcccore-1320","title":"IPython/8.17.2-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

asttokens-2.4.1, backcall-0.2.0, executing-2.0.1, ipython-8.17.2, matplotlib-inline-0.1.6, pickleshare-0.7.5, prompt_toolkit-3.0.41, pure_eval-0.2.2, stack_data-0.6.3, traitlets-5.13.0

"},{"location":"available_software/detail/IPython/#ipython8140-gcccore-1230","title":"IPython/8.14.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

asttokens-2.2.1, backcall-0.2.0, executing-1.2.0, ipython-8.14.0, jedi-0.19.0, matplotlib-inline-0.1.6, parso-0.8.3, pickleshare-0.7.5, prompt_toolkit-3.0.39, pure_eval-0.2.2, stack_data-0.6.2, traitlets-5.9.0

"},{"location":"available_software/detail/IQ-TREE/","title":"IQ-TREE","text":"

Efficient phylogenomic software by maximum likelihood

http://www.iqtree.org/

"},{"location":"available_software/detail/IQ-TREE/#available-modules","title":"Available modules","text":"

The overview below shows which IQ-TREE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IQ-TREE, load one of these modules using a module load command like:

module load IQ-TREE/2.3.5-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IQ-TREE/2.3.5-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/ISA-L/","title":"ISA-L","text":"

Intelligent Storage Acceleration Library

https://github.com/intel/isa-l

"},{"location":"available_software/detail/ISA-L/#available-modules","title":"Available modules","text":"

The overview below shows which ISA-L installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ISA-L, load one of these modules using a module load command like:

module load ISA-L/2.30.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ISA-L/2.30.0-GCCcore-12.3.0 x x x x x x x x x ISA-L/2.30.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ISL/","title":"ISL","text":"

isl is a library for manipulating sets and relations of integer points bounded by linear constraints.

https://libisl.sourceforge.io

"},{"location":"available_software/detail/ISL/#available-modules","title":"Available modules","text":"

The overview below shows which ISL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ISL, load one of these modules using a module load command like:

module load ISL/0.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ISL/0.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ITSTool/","title":"ITSTool","text":"

ITS Tool allows you to translate your XML documents with PO files

http://itstool.org/

"},{"location":"available_software/detail/ITSTool/#available-modules","title":"Available modules","text":"

The overview below shows which ITSTool installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ITSTool, load one of these modules using a module load command like:

module load ITSTool/2.0.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ITSTool/2.0.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ImageMagick/","title":"ImageMagick","text":"

ImageMagick is a software suite to create, edit, compose, or convert bitmap images

https://www.imagemagick.org/

"},{"location":"available_software/detail/ImageMagick/#available-modules","title":"Available modules","text":"

The overview below shows which ImageMagick installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ImageMagick, load one of these modules using a module load command like:

module load ImageMagick/7.1.1-34-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ImageMagick/7.1.1-34-GCCcore-13.2.0 x x x x x x x x x ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Imath/","title":"Imath","text":"

Imath is a C++ and Python library of 2D and 3D vector, matrix, and math operations for computer graphics.

https://imath.readthedocs.io/en/latest/

"},{"location":"available_software/detail/Imath/#available-modules","title":"Available modules","text":"

The overview below shows which Imath installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Imath, load one of these modules using a module load command like:

module load Imath/3.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Imath/3.1.9-GCCcore-13.2.0 x x x x x x x x x Imath/3.1.7-GCCcore-12.3.0 x x x x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/JasPer/","title":"JasPer","text":"

The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.

https://www.ece.uvic.ca/~frodo/jasper/

"},{"location":"available_software/detail/JasPer/#available-modules","title":"Available modules","text":"

The overview below shows which JasPer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JasPer, load one of these modules using a module load command like:

module load JasPer/4.0.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JasPer/4.0.0-GCCcore-13.2.0 x x x x x x x x x JasPer/4.0.0-GCCcore-12.3.0 x x x x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Java/","title":"Java","text":""},{"location":"available_software/detail/Java/#available-modules","title":"Available modules","text":"

The overview below shows which Java installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Java, load one of these modules using a module load command like:

module load Java/17.0.6\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)
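In the table below, an entry such as Java/17(@Java/17.0.6) marks a version alias: the short name resolves to the full version it points at. As a small, hypothetical sketch (plain POSIX parameter expansion; the entry string itself is the only assumption), the (@...) target can be read out like this:

```shell
# Hypothetical: pull the alias target out of a table entry like
# Java/17(@Java/17.0.6) using only POSIX parameter expansion.
entry='Java/17(@Java/17.0.6)'
target=${entry#*@}    # drop everything up to and including the @
target=${target%)}    # drop the trailing closing parenthesis
echo $target          # prints Java/17.0.6
```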

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Java/17.0.6 x x x x x x x x x Java/17(@Java/17.0.6) x x x x x x x x x Java/11.0.20 x x x x x x x x x Java/11(@Java/11.0.20) x x x x x x x x x"},{"location":"available_software/detail/JsonCpp/","title":"JsonCpp","text":"

JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comments in deserialization/serialization steps, making it a convenient format to store user input files.

https://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html

"},{"location":"available_software/detail/JsonCpp/#available-modules","title":"Available modules","text":"

The overview below shows which JsonCpp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JsonCpp, load one of these modules using a module load command like:

module load JsonCpp/1.9.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Judy/","title":"Judy","text":"

A C library that implements a dynamic array.

http://judy.sourceforge.net/

"},{"location":"available_software/detail/Judy/#available-modules","title":"Available modules","text":"

The overview below shows which Judy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Judy, load one of these modules using a module load command like:

module load Judy/1.0.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Judy/1.0.5-GCCcore-12.3.0 x x x x x x x x x Judy/1.0.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/JupyterLab/","title":"JupyterLab","text":"

JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook.

https://jupyter.org/

"},{"location":"available_software/detail/JupyterLab/#available-modules","title":"Available modules","text":"

The overview below shows which JupyterLab installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JupyterLab, load one of these modules using a module load command like:

module load JupyterLab/4.0.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/JupyterLab/#jupyterlab405-gcccore-1230","title":"JupyterLab/4.0.5-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

async-lru-2.0.4, json5-0.9.14, jupyter-lsp-2.2.0, jupyterlab-4.0.5, jupyterlab_server-2.24.0

"},{"location":"available_software/detail/JupyterNotebook/","title":"JupyterNotebook","text":"

The Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience.

https://jupyter.org/

"},{"location":"available_software/detail/JupyterNotebook/#available-modules","title":"Available modules","text":"

The overview below shows which JupyterNotebook installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JupyterNotebook, load one of these modules using a module load command like:

module load JupyterNotebook/7.0.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/KaHIP/","title":"KaHIP","text":"

The graph partitioning framework KaHIP -- Karlsruhe High Quality Partitioning.

https://kahip.github.io/

"},{"location":"available_software/detail/KaHIP/#available-modules","title":"Available modules","text":"

The overview below shows which KaHIP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using KaHIP, load one of these modules using a module load command like:

module load KaHIP/3.16-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 KaHIP/3.16-gompi-2023a x x x x x x x x x KaHIP/3.14-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/KronaTools/","title":"KronaTools","text":"

Krona Tools is a set of scripts to create Krona charts from several bioinformatics tools as well as from text and XML files.

https://github.com/marbl/Krona/wiki/KronaTools

"},{"location":"available_software/detail/KronaTools/#available-modules","title":"Available modules","text":"

The overview below shows which KronaTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using KronaTools, load one of these modules using a module load command like:

module load KronaTools/2.8.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 KronaTools/2.8.1-GCCcore-12.3.0 x x x x x x x x x KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LAME/","title":"LAME","text":"

LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.

http://lame.sourceforge.net/

"},{"location":"available_software/detail/LAME/#available-modules","title":"Available modules","text":"

The overview below shows which LAME installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LAME, load one of these modules using a module load command like:

module load LAME/3.100-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAME/3.100-GCCcore-13.2.0 x x x x x x x x x LAME/3.100-GCCcore-12.3.0 x x x x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LAMMPS/","title":"LAMMPS","text":"

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

https://www.lammps.org

"},{"location":"available_software/detail/LAMMPS/#available-modules","title":"Available modules","text":"

The overview below shows which LAMMPS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LAMMPS, load one of these modules using a module load command like:

module load LAMMPS/29Aug2024-foss-2023b-kokkos\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)
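Module names in the table below follow a NAME/VERSION-TOOLCHAIN pattern (optionally with a suffix such as -kokkos). As a minimal, hypothetical sketch using plain POSIX parameter expansion, one such name splits like this:

```shell
# Hypothetical: split an EESSI module name into software name and
# version-toolchain parts at the first slash.
mod='LAMMPS/29Aug2024-foss-2023b-kokkos'
name=${mod%%/*}   # before the first slash  -> software name
rest=${mod#*/}    # after the first slash   -> version-toolchain string
echo $name        # prints LAMMPS
echo $rest        # prints 29Aug2024-foss-2023b-kokkos
```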

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAMMPS/29Aug2024-foss-2023b-kokkos x x x x x x x x x LAMMPS/2Aug2023_update2-foss-2023a-kokkos x x x x x x x x x"},{"location":"available_software/detail/LERC/","title":"LERC","text":"

LERC is an open-source image or raster format which supports rapid encoding and decoding for any pixel type (not just RGB or Byte). Users set the maximum compression error per pixel while encoding, so the precision of the original input image is preserved (within user-defined error bounds).

https://github.com/Esri/lerc

"},{"location":"available_software/detail/LERC/#available-modules","title":"Available modules","text":"

The overview below shows which LERC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LERC, load one of these modules using a module load command like:

module load LERC/4.0.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LERC/4.0.0-GCCcore-13.2.0 x x x x x x x x x LERC/4.0.0-GCCcore-12.3.0 x x x x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LHAPDF/","title":"LHAPDF","text":"

Les Houches Parton Density Function. LHAPDF is the standard tool for evaluating parton distribution functions (PDFs) in high-energy physics.

http://lhapdf.hepforge.org/

"},{"location":"available_software/detail/LHAPDF/#available-modules","title":"Available modules","text":"

The overview below shows which LHAPDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LHAPDF, load one of these modules using a module load command like:

module load LHAPDF/6.5.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LHAPDF/6.5.4-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LLVM/","title":"LLVM","text":"

The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation (\"LLVM IR\"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.

https://llvm.org/

"},{"location":"available_software/detail/LLVM/#available-modules","title":"Available modules","text":"

The overview below shows which LLVM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LLVM, load one of these modules using a module load command like:

module load LLVM/16.0.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LLVM/16.0.6-GCCcore-13.2.0 x x x x x x x x x LLVM/16.0.6-GCCcore-12.3.0 x x x x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x - x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x x x x"},{"location":"available_software/detail/LMDB/","title":"LMDB","text":"

LMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.

https://symas.com/lmdb

"},{"location":"available_software/detail/LMDB/#available-modules","title":"Available modules","text":"

The overview below shows which LMDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LMDB, load one of these modules using a module load command like:

module load LMDB/0.9.31-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LMDB/0.9.31-GCCcore-12.3.0 x x x x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LRBinner/","title":"LRBinner","text":"

LRBinner is a long-read binning tool published in WABI 2021 proceedings and AMB.

https://github.com/anuradhawick/LRBinner

"},{"location":"available_software/detail/LRBinner/#available-modules","title":"Available modules","text":"

The overview below shows which LRBinner installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LRBinner, load one of these modules using a module load command like:

module load LRBinner/0.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LRBinner/0.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/LRBinner/#lrbinner01-foss-2023a","title":"LRBinner/0.1-foss-2023a","text":"

This is a list of extensions included in the module:

LRBinner-0.1, tabulate-0.9.0

"},{"location":"available_software/detail/LSD2/","title":"LSD2","text":"

Least-squares methods to estimate rates and dates from phylogenies

https://github.com/tothuhien/lsd2

"},{"location":"available_software/detail/LSD2/#available-modules","title":"Available modules","text":"

The overview below shows which LSD2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LSD2, load one of these modules using a module load command like:

module load LSD2/2.4.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LSD2/2.4.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LZO/","title":"LZO","text":"

Portable lossless data compression library

https://www.oberhumer.com/opensource/lzo/

"},{"location":"available_software/detail/LZO/#available-modules","title":"Available modules","text":"

The overview below shows which LZO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LZO, load one of these modules using a module load command like:

module load LZO/2.10-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LZO/2.10-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LibTIFF/","title":"LibTIFF","text":"

tiff: Library and tools for reading and writing TIFF data files

https://libtiff.gitlab.io/libtiff/

"},{"location":"available_software/detail/LibTIFF/#available-modules","title":"Available modules","text":"

The overview below shows which LibTIFF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LibTIFF, load one of these modules using a module load command like:

module load LibTIFF/4.6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Libint/","title":"Libint","text":"

Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory.

https://github.com/evaleev/libint

"},{"location":"available_software/detail/Libint/#available-modules","title":"Available modules","text":"

The overview below shows which Libint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Libint, load one of these modules using a module load command like:

module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x x x x"},{"location":"available_software/detail/LightGBM/","title":"LightGBM","text":"

A fast, distributed, high-performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.

https://lightgbm.readthedocs.io

"},{"location":"available_software/detail/LightGBM/#available-modules","title":"Available modules","text":"

The overview below shows which LightGBM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LightGBM, load one of these modules using a module load command like:

module load LightGBM/4.5.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LightGBM/4.5.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/LightGBM/#lightgbm450-foss-2023a","title":"LightGBM/4.5.0-foss-2023a","text":"

This is a list of extensions included in the module:

lightgbm-4.5.0

"},{"location":"available_software/detail/LittleCMS/","title":"LittleCMS","text":"

Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.

https://www.littlecms.com/

"},{"location":"available_software/detail/LittleCMS/#available-modules","title":"Available modules","text":"

The overview below shows which LittleCMS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LittleCMS, load one of these modules using a module load command like:

module load LittleCMS/2.15-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LittleCMS/2.15-GCCcore-13.2.0 x x x x x x x x x LittleCMS/2.15-GCCcore-12.3.0 x x x x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LoopTools/","title":"LoopTools","text":"

LoopTools is a package for evaluation of scalar and tensor one-loop integrals. It is based on the FF package by G.J. van Oldenborgh.

https://feynarts.de/looptools/

"},{"location":"available_software/detail/LoopTools/#available-modules","title":"Available modules","text":"

The overview below shows which LoopTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LoopTools, load one of these modules using a module load command like:

module load LoopTools/2.15-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LoopTools/2.15-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Lua/","title":"Lua","text":"

Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.

https://www.lua.org/

"},{"location":"available_software/detail/Lua/#available-modules","title":"Available modules","text":"

The overview below shows which Lua installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Lua, load one of these modules using a module load command like:

module load Lua/5.4.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Lua/5.4.6-GCCcore-13.2.0 x x x x x x x x x Lua/5.4.6-GCCcore-12.3.0 x x x x x x x x x Lua/5.4.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MAFFT/","title":"MAFFT","text":"

MAFFT is a multiple sequence alignment program for unix-like operating systems. It offers a range of multiple alignment methods, L-INS-i (accurate; for alignment of <\u223c200 sequences), FFT-NS-2 (fast; for alignment of <\u223c30,000 sequences), etc.

https://mafft.cbrc.jp/alignment/software/source.html

"},{"location":"available_software/detail/MAFFT/#available-modules","title":"Available modules","text":"

The overview below shows which MAFFT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MAFFT, load one of these modules using a module load command like:

module load MAFFT/7.520-GCC-12.3.0-with-extensions\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x - x x"},{"location":"available_software/detail/MBX/","title":"MBX","text":"

MBX is an energy and force calculator for data-driven many-body simulations

https://github.com/paesanilab/MBX

"},{"location":"available_software/detail/MBX/#available-modules","title":"Available modules","text":"

The overview below shows which MBX installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MBX, load one of these modules using a module load command like:

module load MBX/1.1.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MBX/1.1.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MCL/","title":"MCL","text":"

The MCL algorithm is short for the Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.

https://micans.org/mcl/

"},{"location":"available_software/detail/MCL/#available-modules","title":"Available modules","text":"

The overview below shows which MCL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MCL, load one of these modules using a module load command like:

module load MCL/22.282-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MCL/22.282-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/MDAnalysis/","title":"MDAnalysis","text":"

MDAnalysis is an object-oriented Python library to analyze trajectories from molecular dynamics (MD) simulations in many popular formats.

https://www.mdanalysis.org/

"},{"location":"available_software/detail/MDAnalysis/#available-modules","title":"Available modules","text":"

The overview below shows which MDAnalysis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MDAnalysis, load one of these modules using a module load command like:

module load MDAnalysis/2.4.2-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDAnalysis/2.4.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/MDAnalysis/#mdanalysis242-foss-2022b","title":"MDAnalysis/2.4.2-foss-2022b","text":"

This is a list of extensions included in the module:

fasteners-0.18, funcsigs-1.0.2, GridDataFormats-1.0.1, gsd-2.8.0, MDAnalysis-2.4.2, mmtf-python-1.1.3, mrcfile-1.4.3, msgpack-1.0.5

"},{"location":"available_software/detail/MDI/","title":"MDI","text":"

The MolSSI Driver Interface (MDI) project provides a standardized API for fast, on-the-fly communication between computational chemistry codes. This greatly simplifies the process of implementing methods that require the cooperation of multiple software packages and enables developers to write a single implementation that works across many different codes. The API is sufficiently general to support a wide variety of techniques, including QM/MM, ab initio MD, machine learning, advanced sampling, and path integral MD, while also being straightforwardly extensible. Communication between codes is handled by the MDI Library, which enables tight coupling between codes using either the MPI or TCP/IP methods.

https://github.com/MolSSI-MDI/MDI_Library

"},{"location":"available_software/detail/MDI/#available-modules","title":"Available modules","text":"

The overview below shows which MDI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MDI, load one of these modules using a module load command like:

module load MDI/1.4.29-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDI/1.4.29-gompi-2023b x x x x x x x x x MDI/1.4.26-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/METIS/","title":"METIS","text":"

METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.

http://glaros.dtc.umn.edu/gkhome/metis/metis/overview

"},{"location":"available_software/detail/METIS/#available-modules","title":"Available modules","text":"

The overview below shows which METIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using METIS, load one of these modules using a module load command like:

module load METIS/5.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 METIS/5.1.0-GCCcore-12.3.0 x x x x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MMseqs2/","title":"MMseqs2","text":"

MMseqs2: ultra fast and sensitive search and clustering suite

https://mmseqs.com

"},{"location":"available_software/detail/MMseqs2/#available-modules","title":"Available modules","text":"

The overview below shows which MMseqs2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MMseqs2, load one of these modules using a module load command like:

module load MMseqs2/14-7e284-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MMseqs2/14-7e284-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/MODFLOW/","title":"MODFLOW","text":"

MODFLOW is the USGS's modular hydrologic model. MODFLOW is considered an international standard for simulating and predicting groundwater conditions and groundwater/surface-water interactions.

https://www.usgs.gov/mission-areas/water-resources/science/modflow-and-related-programs

"},{"location":"available_software/detail/MODFLOW/#available-modules","title":"Available modules","text":"

The overview below shows which MODFLOW installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MODFLOW, load one of these modules using a module load command like:

module load MODFLOW/6.4.4-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MODFLOW/6.4.4-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MPC/","title":"MPC","text":"

GNU MPC is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed-precision real floating-point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.

http://www.multiprecision.org/

"},{"location":"available_software/detail/MPC/#available-modules","title":"Available modules","text":"

The overview below shows which MPC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MPC, load one of these modules using a module load command like:

module load MPC/1.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPC/1.3.1-GCCcore-13.2.0 x x x x x x x x x MPC/1.3.1-GCCcore-12.3.0 x x x x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MPFR/","title":"MPFR","text":"

The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.

https://www.mpfr.org

"},{"location":"available_software/detail/MPFR/#available-modules","title":"Available modules","text":"

The overview below shows which MPFR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MPFR, load one of these modules using a module load command like:

module load MPFR/4.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPFR/4.2.1-GCCcore-13.2.0 x x x x x x x x x MPFR/4.2.0-GCCcore-12.3.0 x x x x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MUMPS/","title":"MUMPS","text":"

A parallel sparse direct solver

https://graal.ens-lyon.fr/MUMPS/

"},{"location":"available_software/detail/MUMPS/#available-modules","title":"Available modules","text":"

The overview below shows which MUMPS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MUMPS, load one of these modules using a module load command like:

module load MUMPS/5.6.1-foss-2023a-metis\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MUMPS/5.6.1-foss-2023a-metis x x x x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x - x x"},{"location":"available_software/detail/Mako/","title":"Mako","text":"

A super-fast templating language that borrows the best ideas from the existing templating languages

https://www.makotemplates.org

"},{"location":"available_software/detail/Mako/#available-modules","title":"Available modules","text":"

The overview below shows which Mako installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mako, load one of these modules using a module load command like:

module load Mako/1.2.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mako/1.2.4-GCCcore-13.2.0 x x x x x x x x x Mako/1.2.4-GCCcore-12.3.0 x x x x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Mako/#mako124-gcccore-1320","title":"Mako/1.2.4-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

Mako-1.2.4, MarkupSafe-2.1.3

"},{"location":"available_software/detail/Mako/#mako124-gcccore-1230","title":"Mako/1.2.4-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Mako-1.2.4, MarkupSafe-2.1.3

"},{"location":"available_software/detail/MariaDB/","title":"MariaDB","text":"

MariaDB is an enhanced, drop-in replacement for MySQL. Included engines: MyISAM, Aria, InnoDB, RocksDB, TokuDB, OQGraph, Mroonga.

https://mariadb.org/

"},{"location":"available_software/detail/MariaDB/#available-modules","title":"Available modules","text":"

The overview below shows which MariaDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MariaDB, load one of these modules using a module load command like:

module load MariaDB/11.6.0-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MariaDB/11.6.0-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Mash/","title":"Mash","text":"

Fast genome and metagenome distance estimation using MinHash

http://mash.readthedocs.org

"},{"location":"available_software/detail/Mash/#available-modules","title":"Available modules","text":"

The overview below shows which Mash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mash, load one of these modules using a module load command like:

module load Mash/2.3-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mash/2.3-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Mesa/","title":"Mesa","text":"

Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.

https://www.mesa3d.org/

"},{"location":"available_software/detail/Mesa/#available-modules","title":"Available modules","text":"

The overview below shows which Mesa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mesa, load one of these modules using a module load command like:

module load Mesa/23.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mesa/23.1.9-GCCcore-13.2.0 x x x x x x x x x Mesa/23.1.4-GCCcore-12.3.0 x x x x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Meson/","title":"Meson","text":"

Meson is a cross-platform build system designed to be both as fast and as user friendly as possible.

https://mesonbuild.com

"},{"location":"available_software/detail/Meson/#available-modules","title":"Available modules","text":"

The overview below shows which Meson installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Meson, load one of these modules using a module load command like:

module load Meson/1.3.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Meson/1.3.1-GCCcore-12.3.0 x x x x x x x x x Meson/1.2.3-GCCcore-13.2.0 x x x x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MetaEuk/","title":"MetaEuk","text":"

MetaEuk is a modular toolkit designed for large-scale gene discovery and annotation in eukaryotic metagenomic contigs.

https://metaeuk.soedinglab.org

"},{"location":"available_software/detail/MetaEuk/#available-modules","title":"Available modules","text":"

The overview below shows which MetaEuk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MetaEuk, load one of these modules using a module load command like:

module load MetaEuk/6-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MetaEuk/6-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MetalWalls/","title":"MetalWalls","text":"

MetalWalls (MW) is a molecular dynamics code dedicated to the modelling of electrochemical systems. Its main originality is the inclusion of a series of methods that make it possible to apply a constant potential within the electrode materials.

https://gitlab.com/ampere2/metalwalls

"},{"location":"available_software/detail/MetalWalls/#available-modules","title":"Available modules","text":"

The overview below shows which MetalWalls installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MetalWalls, load one of these modules using a module load command like:

module load MetalWalls/21.06.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MetalWalls/21.06.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MultiQC/","title":"MultiQC","text":"

Aggregate results from bioinformatics analyses across many samples into a single report. MultiQC searches a given directory for analysis logs and compiles an HTML report. It's a general use tool, perfect for summarising the output from numerous bioinformatics tools.

https://multiqc.info

"},{"location":"available_software/detail/MultiQC/#available-modules","title":"Available modules","text":"

The overview below shows which MultiQC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MultiQC, load one of these modules using a module load command like:

module load MultiQC/1.14-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MultiQC/1.14-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/MultiQC/#multiqc114-foss-2022b","title":"MultiQC/1.14-foss-2022b","text":"

This is a list of extensions included in the module:

coloredlogs-15.0.1, colormath-3.0.0, commonmark-0.9.1, humanfriendly-10.0, lzstring-1.0.4, Markdown-3.4.1, markdown-it-py-2.1.0, mdurl-0.1.2, multiqc-1.14, Pygments-2.14.0, rich-13.3.1, rich-click-1.6.1, spectra-0.0.11

"},{"location":"available_software/detail/Mustache/","title":"Mustache","text":"

Mustache (Multi-scale Detection of Chromatin Loops from Hi-C and Micro-C Maps using Scale-Space Representation) is a tool for multi-scale detection of chromatin loops from Hi-C and Micro-C contact maps at high resolutions (10 kbp all the way to 500 bp and even more). Mustache uses recent technical advances in scale-space theory in computer vision to detect chromatin loops caused by the interaction of DNA segments of variable size.

https://github.com/ay-lab/mustache

"},{"location":"available_software/detail/Mustache/#available-modules","title":"Available modules","text":"

The overview below shows which Mustache installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mustache, load one of these modules using a module load command like:

module load Mustache/1.3.3-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mustache/1.3.3-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/NASM/","title":"NASM","text":"

NASM: General-purpose x86 assembler

https://www.nasm.us/

"},{"location":"available_software/detail/NASM/#available-modules","title":"Available modules","text":"

The overview below shows which NASM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NASM, load one of these modules using a module load command like:

module load NASM/2.16.01-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NASM/2.16.01-GCCcore-13.2.0 x x x x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NCCL/","title":"NCCL","text":"

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs.

https://developer.nvidia.com/nccl

"},{"location":"available_software/detail/NCCL/#available-modules","title":"Available modules","text":"

The overview below shows which NCCL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NCCL, load one of these modules using a module load command like:

module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/NLTK/","title":"NLTK","text":"

NLTK is a leading platform for building Python programs to work with human language data.

https://www.nltk.org/

"},{"location":"available_software/detail/NLTK/#available-modules","title":"Available modules","text":"

The overview below shows which NLTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NLTK, load one of these modules using a module load command like:

module load NLTK/3.8.1-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NLTK/3.8.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/NLTK/#nltk381-foss-2023b","title":"NLTK/3.8.1-foss-2023b","text":"

This is a list of extensions included in the module:

NLTK-3.8.1, python-crfsuite-0.9.10, regex-2023.12.25

"},{"location":"available_software/detail/NLopt/","title":"NLopt","text":"

NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.

http://ab-initio.mit.edu/wiki/index.php/NLopt

"},{"location":"available_software/detail/NLopt/#available-modules","title":"Available modules","text":"

The overview below shows which NLopt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NLopt, load one of these modules using a module load command like:

module load NLopt/2.7.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NLopt/2.7.1-GCCcore-13.2.0 x x x x x x x x x NLopt/2.7.1-GCCcore-12.3.0 x x x x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NSPR/","title":"NSPR","text":"

Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.

https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR

"},{"location":"available_software/detail/NSPR/#available-modules","title":"Available modules","text":"

The overview below shows which NSPR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NSPR, load one of these modules using a module load command like:

module load NSPR/4.35-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSPR/4.35-GCCcore-13.2.0 x x x x x x x x x NSPR/4.35-GCCcore-12.3.0 x x x x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NSS/","title":"NSS","text":"

Network Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.

https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS

"},{"location":"available_software/detail/NSS/#available-modules","title":"Available modules","text":"

The overview below shows which NSS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NSS, load one of these modules using a module load command like:

module load NSS/3.94-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSS/3.94-GCCcore-13.2.0 x x x x x x x x x NSS/3.89.1-GCCcore-12.3.0 x x x x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Nextflow/","title":"Nextflow","text":"

Nextflow is a reactive workflow framework and a programming DSL that eases writing computational pipelines with complex data

https://www.nextflow.io/

"},{"location":"available_software/detail/Nextflow/#available-modules","title":"Available modules","text":"

The overview below shows which Nextflow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Nextflow, load one of these modules using a module load command like:

module load Nextflow/23.10.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Nextflow/23.10.0 x x x x x x x x x"},{"location":"available_software/detail/Ninja/","title":"Ninja","text":"

Ninja is a small build system with a focus on speed.

https://ninja-build.org/

"},{"location":"available_software/detail/Ninja/#available-modules","title":"Available modules","text":"

The overview below shows which Ninja installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ninja, load one of these modules using a module load command like:

module load Ninja/1.11.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ninja/1.11.1-GCCcore-13.2.0 x x x x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OPARI2/","title":"OPARI2","text":"

OPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.

https://www.score-p.org

"},{"location":"available_software/detail/OPARI2/#available-modules","title":"Available modules","text":"

The overview below shows which OPARI2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OPARI2, load one of these modules using a module load command like:

module load OPARI2/2.0.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OPARI2/2.0.8-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/OSU-Micro-Benchmarks/","title":"OSU-Micro-Benchmarks","text":"

OSU Micro-Benchmarks

https://mvapich.cse.ohio-state.edu/benchmarks/

"},{"location":"available_software/detail/OSU-Micro-Benchmarks/#available-modules","title":"Available modules","text":"

The overview below shows which OSU-Micro-Benchmarks installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x x x x OSU-Micro-Benchmarks/7.2-gompi-2023a-CUDA-12.1.1 x x x x x x - x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/OTF2/","title":"OTF2","text":"

The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.

https://www.score-p.org

"},{"location":"available_software/detail/OTF2/#available-modules","title":"Available modules","text":"

The overview below shows which OTF2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OTF2, load one of these modules using a module load command like:

module load OTF2/3.0.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OTF2/3.0.3-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/OpenBLAS/","title":"OpenBLAS","text":"

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

http://www.openblas.net/

"},{"location":"available_software/detail/OpenBLAS/#available-modules","title":"Available modules","text":"

The overview below shows which OpenBLAS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenBLAS, load one of these modules using a module load command like:

module load OpenBLAS/0.3.24-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenEXR/","title":"OpenEXR","text":"

OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications

https://www.openexr.com/

"},{"location":"available_software/detail/OpenEXR/#available-modules","title":"Available modules","text":"

The overview below shows which OpenEXR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenEXR, load one of these modules using a module load command like:

module load OpenEXR/3.2.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenEXR/3.2.0-GCCcore-13.2.0 x x x x x x x x x OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenFOAM/","title":"OpenFOAM","text":"

OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.

https://www.openfoam.org/

"},{"location":"available_software/detail/OpenFOAM/#available-modules","title":"Available modules","text":"

The overview below shows which OpenFOAM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenFOAM, load one of these modules using a module load command like:

module load OpenFOAM/v2406-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenFOAM/v2406-foss-2023a x x x x x x x x x OpenFOAM/v2312-foss-2023a x x x x x x x x x OpenFOAM/11-foss-2023a x x x x x x x x x OpenFOAM/10-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/OpenJPEG/","title":"OpenJPEG","text":"

OpenJPEG is an open-source JPEG 2000 codec written in C. It has been developed in order to promote the use of JPEG 2000, a still-image compression standard from the Joint Photographic Experts Group (JPEG). Since May 2015, it has been officially recognized by ISO/IEC and ITU-T as a JPEG 2000 Reference Software.

https://www.openjpeg.org/

"},{"location":"available_software/detail/OpenJPEG/#available-modules","title":"Available modules","text":"

The overview below shows which OpenJPEG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenJPEG, load one of these modules using a module load command like:

module load OpenJPEG/2.5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenJPEG/2.5.0-GCCcore-13.2.0 x x x x x x x x x OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenMPI/","title":"OpenMPI","text":"

The Open MPI Project is an open source MPI-3 implementation.

https://www.open-mpi.org/

"},{"location":"available_software/detail/OpenMPI/#available-modules","title":"Available modules","text":"

The overview below shows which OpenMPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenMPI, load one of these modules using a module load command like:

module load OpenMPI/4.1.6-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenMPI/4.1.6-GCC-13.2.0 x x x x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenPGM/","title":"OpenPGM","text":"

OpenPGM is an open source implementation of the Pragmatic General Multicast (PGM) specification in RFC 3208 available at www.ietf.org. PGM is a reliable and scalable multicast protocol that enables receivers to detect loss, request retransmission of lost data, or notify an application of unrecoverable loss. PGM is a receiver-reliable protocol, which means the receiver is responsible for ensuring all data is received, absolving the sender of reception responsibility.

https://code.google.com/p/openpgm/

"},{"location":"available_software/detail/OpenPGM/#available-modules","title":"Available modules","text":"

The overview below shows which OpenPGM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenPGM, load one of these modules using a module load command like:

module load OpenPGM/5.2.122-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenPGM/5.2.122-GCCcore-13.2.0 x x x x x x x x x OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/OpenSSL/","title":"OpenSSL","text":"

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols, as well as a full-strength general-purpose cryptography library.

https://www.openssl.org/

"},{"location":"available_software/detail/OpenSSL/#available-modules","title":"Available modules","text":"

The overview below shows which OpenSSL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenSSL, load one of these modules using a module load command like:

module load OpenSSL/1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenSSL/1.1 x x x x x x x x x"},{"location":"available_software/detail/OrthoFinder/","title":"OrthoFinder","text":"

OrthoFinder is a fast, accurate and comprehensive platform for comparative genomics

https://github.com/davidemms/OrthoFinder

"},{"location":"available_software/detail/OrthoFinder/#available-modules","title":"Available modules","text":"

The overview below shows which OrthoFinder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OrthoFinder, load one of these modules using a module load command like:

module load OrthoFinder/2.5.5-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OrthoFinder/2.5.5-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Osi/","title":"Osi","text":"

Osi (Open Solver Interface) provides an abstract base class to a generic linear programming (LP) solver, along with derived classes for specific solvers. Many applications may be able to use the Osi to insulate themselves from a specific LP solver. That is, programs written to the OSI standard may be linked to any solver with an OSI interface and should produce correct results. The OSI has been significantly extended compared to its first incarnation. Currently, the OSI supports linear programming solvers and has rudimentary support for integer programming.

https://github.com/coin-or/Osi

"},{"location":"available_software/detail/Osi/#available-modules","title":"Available modules","text":"

The overview below shows which Osi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Osi, load one of these modules using a module load command like:

module load Osi/0.108.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Osi/0.108.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PAPI/","title":"PAPI","text":"

PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition, Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.

https://icl.cs.utk.edu/projects/papi/

"},{"location":"available_software/detail/PAPI/#available-modules","title":"Available modules","text":"

The overview below shows which PAPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PAPI, load one of these modules using a module load command like:

module load PAPI/7.1.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PAPI/7.1.0-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PCRE/","title":"PCRE","text":"

The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.

https://www.pcre.org/

"},{"location":"available_software/detail/PCRE/#available-modules","title":"Available modules","text":"

The overview below shows which PCRE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PCRE, load one of these modules using a module load command like:

module load PCRE/8.45-GCCcore-13.2.0\n
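With a PCRE module loaded, tools and programs linked against the library accept Perl-compatible regular expressions. As an illustration of the pattern syntax PCRE provides (shown here via GNU grep's `-P` option, which relies on grep itself being built with PCRE support — your system's grep, not the module, is the assumption here):

```shell
# Perl-compatible lookahead: extract only the digits that precede "bar".
# Requires a GNU grep built with PCRE support (-P); -o prints just the match.
echo 'foo123bar' | grep -oP '\d+(?=bar)'   # prints: 123
```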

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE/8.45-GCCcore-13.2.0 x x x x x x x x x PCRE/8.45-GCCcore-12.3.0 x x x x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PCRE2/","title":"PCRE2","text":"

The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.

https://www.pcre.org/

"},{"location":"available_software/detail/PCRE2/#available-modules","title":"Available modules","text":"

The overview below shows which PCRE2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PCRE2, load one of these modules using a module load command like:

module load PCRE2/10.42-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE2/10.42-GCCcore-13.2.0 x x x x x x x x x PCRE2/10.42-GCCcore-12.3.0 x x x x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PDT/","title":"PDT","text":"

Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations.

https://www.cs.uoregon.edu/research/pdt/

"},{"location":"available_software/detail/PDT/#available-modules","title":"Available modules","text":"

The overview below shows which PDT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PDT, load one of these modules using a module load command like:

module load PDT/3.25.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PDT/3.25.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PETSc/","title":"PETSc","text":"

PETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.

https://www.mcs.anl.gov/petsc

"},{"location":"available_software/detail/PETSc/#available-modules","title":"Available modules","text":"

The overview below shows which PETSc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PETSc, load one of these modules using a module load command like:

module load PETSc/3.20.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PETSc/3.20.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PGPLOT/","title":"PGPLOT","text":"

The PGPLOT Graphics Subroutine Library is a Fortran- or C-callable, device-independent graphics package for making simple scientific graphs. It is intended for making graphical images of publication quality with minimum effort on the part of the user. For most applications, the program can be device-independent, and the output can be directed to the appropriate device at run time.

https://sites.astro.caltech.edu/~tjp/pgplot/

"},{"location":"available_software/detail/PGPLOT/#available-modules","title":"Available modules","text":"

The overview below shows which PGPLOT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PGPLOT, load one of these modules using a module load command like:

module load PGPLOT/5.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PGPLOT/5.2.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PLUMED/","title":"PLUMED","text":"

PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both Fortran and C/C++ codes.

https://www.plumed.org

"},{"location":"available_software/detail/PLUMED/#available-modules","title":"Available modules","text":"

The overview below shows which PLUMED installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PLUMED, load one of these modules using a module load command like:

module load PLUMED/2.9.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLUMED/2.9.2-foss-2023b x x x x x x x x x PLUMED/2.9.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PLY/","title":"PLY","text":"

PLY is yet another implementation of lex and yacc for Python.

https://www.dabeaz.com/ply/

"},{"location":"available_software/detail/PLY/#available-modules","title":"Available modules","text":"

The overview below shows which PLY installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PLY, load one of these modules using a module load command like:

module load PLY/3.11-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLY/3.11-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PMIx/","title":"PMIx","text":"

Process Management for Exascale Environments. PMI Exascale (PMIx) represents an attempt to provide an extended version of the PMI standard specifically designed to support clusters up to and including exascale sizes. The overall objective of the project is not to branch the existing pseudo-standard definitions - in fact, PMIx fully supports both of the existing PMI-1 and PMI-2 APIs - but rather to (a) augment and extend those APIs to eliminate some current restrictions that impact scalability, and (b) provide a reference implementation of the PMI-server that demonstrates the desired level of scalability.

https://pmix.org/

"},{"location":"available_software/detail/PMIx/#available-modules","title":"Available modules","text":"

The overview below shows which PMIx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PMIx, load one of these modules using a module load command like:

module load PMIx/4.2.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PMIx/4.2.6-GCCcore-13.2.0 x x x x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PROJ/","title":"PROJ","text":"

Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into Cartesian coordinates.

https://proj.org

"},{"location":"available_software/detail/PROJ/#available-modules","title":"Available modules","text":"

The overview below shows which PROJ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PROJ, load one of these modules using a module load command like:

module load PROJ/9.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PROJ/9.3.1-GCCcore-13.2.0 x x x x x x x x x PROJ/9.2.0-GCCcore-12.3.0 x x x x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Pango/","title":"Pango","text":"

Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.

https://www.pango.org/

"},{"location":"available_software/detail/Pango/#available-modules","title":"Available modules","text":"

The overview below shows which Pango installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pango, load one of these modules using a module load command like:

module load Pango/1.51.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pango/1.51.0-GCCcore-13.2.0 x x x x x x x x x Pango/1.50.14-GCCcore-12.3.0 x x x x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ParMETIS/","title":"ParMETIS","text":"

ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs, meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes.

http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview

"},{"location":"available_software/detail/ParMETIS/#available-modules","title":"Available modules","text":"

The overview below shows which ParMETIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ParMETIS, load one of these modules using a module load command like:

module load ParMETIS/4.0.3-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParMETIS/4.0.3-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/ParaView/","title":"ParaView","text":"

ParaView is a scientific parallel visualizer.

https://www.paraview.org

"},{"location":"available_software/detail/ParaView/#available-modules","title":"Available modules","text":"

The overview below shows which ParaView installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ParaView, load one of these modules using a module load command like:

module load ParaView/5.11.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParaView/5.11.2-foss-2023a x x x x x x x x x ParaView/5.11.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Paraver/","title":"Paraver","text":"

A very powerful performance visualization and analysis tool based on traces, which can be used to analyse any information that is expressed in its input trace format. Traces for parallel MPI, OpenMP and other programs can be generated with Extrae.

https://tools.bsc.es/paraver

"},{"location":"available_software/detail/Paraver/#available-modules","title":"Available modules","text":"

The overview below shows which Paraver installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Paraver, load one of these modules using a module load command like:

module load Paraver/4.11.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Paraver/4.11.4-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Perl-bundle-CPAN/","title":"Perl-bundle-CPAN","text":"

A set of common packages from CPAN

https://www.perl.org/

"},{"location":"available_software/detail/Perl-bundle-CPAN/#available-modules","title":"Available modules","text":"

The overview below shows which Perl-bundle-CPAN installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Perl-bundle-CPAN/#perl-bundle-cpan5361-gcccore-1230","title":"Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.335, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::COW-0.007, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.008, Business::ISBN::Data-20230516.001, Canary::Stability-2013, Capture::Tiny-0.48, Carp::Clan-6.08, Carp::Heavy-1.50, CGI-4.57, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Method::Modifiers-2.15, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Class::XSAccessor-1.19, Clone-0.46, Clone::Choose-0.010, common::sense-3.75, Compress::Raw::Zlib-2.204, Config::General-2.65, Config::INI-0.029, Config::MVP-2.200013, Config::MVP::Reader::INI-2.101465, Config::Simple-4.58, Config::Tiny-2.29, Const::Exporter-1.2.2, Const::Fast-0.014, CPAN::Meta::Check-0.017, CPAN::Uploader-0.103018, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.114, Data::Section-0.200008, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.59, DateTime::Locale-1.38, DateTime::TimeZone-2.60, DateTime::Tiny-1.07, DBD::CSV-0.60, DBD::SQLite-1.72, DBI-1.643, DBIx::Admin::CreateTable-2.11, DBIx::Admin::DSNManager-2.02, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::FindPerl-0.016, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.030, Email::Date::Format-1.008, Encode-3.19, 
Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter::Declare-0.114, Exporter::Tiny-1.006002, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.70, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.014, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long::Descriptive-0.111, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9726, Graph::ReadWrite-2.10, Hash::Merge-0.302, Hash::Objectify-0.008, Heap-0.80, Hook::LexWrap-0.26, HTML::Entities::Interpolate-1.10, HTML::Form-6.11, HTML::Parser-3.81, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::CookieJar-0.014, HTTP::Cookies-6.10, HTTP::Daemon-6.16, HTTP::Date-6.05, HTTP::Message-6.44, HTTP::Negotiate-6.01, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::Compress::Zip-2.204, IO::HTML-1.004, IO::Socket::SSL-2.083, IO::String-1.08, IO::Stringy-2.113, IO::TieCombine-1.005, IO::Tty-1.17, IO::Tty-1.17, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.10, JSON::MaybeXS-1.004005, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.59, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.71, Log::Dispatch::Array-1.005, Log::Dispatchouli-3.002, Log::Handler-0.90, Log::Log4perl-1.57, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.34, Log::Report::Optional-1.07, 
Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.70, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.884, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-v1.013.1, MIME::Lite-3.033, MIME::Types-2.24, Mixin::Linewise::Readers-0.111, Mock::Quick-1.111, Module::Build-0.4234, Module::Build::Tiny-0.045, Module::Build::XSUtil-0.19, Module::CoreList-5.20230423, Module::Implementation-0.09, Module::Install-1.21, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000038, Module::Path-0.19, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005005, Moose-2.2203, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.007, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.203, MooseX::Types-0.50, MooseX::Types::Perl-0.101344, Mouse-v2.5.10, Mozilla::CA-20221114, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.15, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.18, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.31, Params::ValidationCompiler-0.31, parent-0.241, Parse::RecDescent-1.967015, Parse::Yapp-1.21, Path::Tiny-0.144, PDF::API2-2.044, Perl::OSType-1.010, Perl::PrereqScanner-1.100, PerlIO::utf8_strict-0.010, Pod::Elemental-0.103006, Pod::Escapes-1.07, Pod::Eventual-0.094003, Pod::LaTeX-0.61, Pod::Man-5.01, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.45, Pod::Weaver-4.019, PPI-1.276, Readonly-2.05, Ref::Util-0.204, Regexp::Common-2017060201, Role::HasMessage-0.007, Role::Identifiable::HasIdent-0.009, Role::Tiny-2.002004, Scalar::Util-1.63, 
Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104003, Specio-0.48, Spiffy-0.46, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Errf-0.009, String::Flogger-1.101246, String::Formatter-1.235, String::Print-0.94, String::RewritePrefix-0.009, String::Truncate-1.100603, String::TtyLength-0.03, Sub::Exporter-0.989, Sub::Exporter::ForMethods-0.100055, Sub::Exporter::GlobExporter-0.006, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.929, Sub::Name-0.27, Sub::Quote-2.006008, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Linux::Device::CPU-0.7905, Sys::Info::Driver::Unknown-0.79, Sys::Info::Driver::Unknown::Device::CPU-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.45, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000155, Test::Base-0.89, Test::CheckDeps-0.010, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.204, Test::Differences-0.69, Test::Exception-0.43, Test::FailWarnings-0.008, Test::Fatal-0.017, Test::File-1.993, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More::UTF8-0.05, Test::Most-0.38, Test::Needs-0.002010, Test::NoWarnings-1.06, Test::Object-0.08, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302195, Test::SubCalls-1.10, Test::Sys::Info-0.23, Test::Version-2.09, Test::Warn-0.37, Test::Warnings-0.031, Test::Without::Module-0.21, Test::YAML-1.07, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, 
Text::CSV_XS-1.50, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::Soundex-3.05, Text::Table-1.135, Text::Table::Manifold-1.03, Text::Template-1.61, Throwable-1.001, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.35, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Type::Tiny-2.004000, Types::Serialiser-1.01, Types::Serialiser-1.01, Unicode::EastAsianWidth-12.0, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, Unix::Processors-2.046, URI-5.19, Variable::Magic-0.63, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::Writer-0.900, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.74

"},{"location":"available_software/detail/Perl/","title":"Perl","text":"

Larry Wall's Practical Extraction and Report Language. Includes a small selection of extra CPAN packages for core functionality.

https://www.perl.org/

"},{"location":"available_software/detail/Perl/#available-modules","title":"Available modules","text":"

The overview below shows which Perl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Perl, load one of these modules using a module load command like:

module load Perl/5.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl/5.38.0-GCCcore-13.2.0 x x x x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x x x x Perl/5.36.0-GCCcore-12.2.0-minimal x x x x x x - x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Perl/#perl5380-gcccore-1320","title":"Perl/5.38.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21

"},{"location":"available_software/detail/Perl/#perl5361-gcccore-1230","title":"Perl/5.36.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21

"},{"location":"available_software/detail/Perl/#perl5360-gcccore-1220","title":"Perl/5.36.0-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.334, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.007, Business::ISBN::Data-20210112.006, Canary::Stability-2013, Capture::Tiny-0.48, Carp-1.50, Carp::Clan-6.08, Carp::Heavy-1.50, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Clone-0.45, Clone::Choose-0.010, common::sense-3.75, Config::General-2.65, Config::INI-0.027, Config::MVP-2.200012, Config::Simple-4.58, Config::Tiny-2.28, constant-1.33, CPAN::Meta::Check-0.014, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper-2.183, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.112, Data::Section-0.200007, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.58, DateTime::Locale-1.36, DateTime::TimeZone-2.53, DateTime::Tiny-1.07, DBD::CSV-0.59, DBD::SQLite-1.70, DBI-1.643, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.025, Email::Date::Format-1.005, Encode-3.19, Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter-5.74, Exporter::Declare-0.114, Exporter::Tiny-1.004000, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, 
ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.64, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::Path-2.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.013, File::Spec-3.75, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long-2.52, Getopt::Long::Descriptive-0.110, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9725, Graph::ReadWrite-2.10, Hash::Merge-0.302, Heap-0.80, HTML::Entities::Interpolate-1.10, HTML::Form-6.10, HTML::Parser-3.78, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::Cookies-6.10, HTTP::Daemon-6.14, HTTP::Date-6.05, HTTP::Negotiate-6.01, HTTP::Request-6.37, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::HTML-1.004, IO::Socket::SSL-2.075, IO::String-1.08, IO::Stringy-2.113, IO::Tty-1.16, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.09, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.58, List::Util-1.63, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.70, Log::Dispatchouli-2.023, Log::Handler-0.90, Log::Log4perl-1.56, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.33, Log::Report::Optional-1.07, Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.67, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.879, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-1.013.1, MIME::Lite-3.033, MIME::Types-2.22, 
Mixin::Linewise::Readers-0.110, Mock::Quick-1.111, Module::Build-0.4231, Module::Build::Tiny-0.039, Module::Build::XSUtil-0.19, Module::CoreList-5.20220820, Module::Implementation-0.09, Module::Install-1.19, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000037, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005004, Moose-2.2201, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.006, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.201, MooseX::Types-0.50, MooseX::Types::Perl-0.101343, Mouse-v2.5.10, Mozilla::CA-20211001, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.14, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.17, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.30, Params::ValidationCompiler-0.30, parent-0.238, Parse::RecDescent-1.967015, Path::Tiny-0.124, PDF::API2-2.043, Perl::OSType-1.010, PerlIO::utf8_strict-0.009, Pod::Elemental-0.103005, Pod::Escapes-1.07, Pod::Eventual-0.094002, Pod::LaTeX-0.61, Pod::Man-4.14, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.43, Pod::Weaver-4.018, Readonly-2.05, Regexp::Common-2017060201, Role::HasMessage-0.006, Role::Identifiable::HasIdent-0.008, Role::Tiny-2.002004, Scalar::Util-1.63, Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104002, Specio-0.48, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Flogger-1.101245, String::Print-0.94, 
String::RewritePrefix-0.008, String::Truncate-1.100602, Sub::Exporter-0.988, Sub::Exporter::ForMethods-0.100054, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.928, Sub::Name-0.26, Sub::Quote-2.006006, Sub::Uplevel-0.2800, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Unknown-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.42, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000145, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.130, Test::Differences-0.69, Test::Exception-0.43, Test::Fatal-0.016, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More-1.302191, Test::More::UTF8-0.05, Test::Most-0.37, Test::Needs-0.002009, Test::NoWarnings-1.06, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302191, Test::Version-2.09, Test::Warn-0.37, Test::Warnings-0.031, Test::Without::Module-0.20, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, Text::CSV_XS-1.48, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::ParseWords-3.31, Text::Soundex-3.05, Text::Table-1.134, Text::Template-1.61, Thread::Queue-3.13, Throwable-1.000, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.30, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Types::Serialiser-1.01, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, URI-5.12, URI::Escape-5.12, Variable::Magic-0.62, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, 
XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.73

"},{"location":"available_software/detail/Pillow-SIMD/","title":"Pillow-SIMD","text":"

Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.

https://github.com/uploadcare/pillow-simd

"},{"location":"available_software/detail/Pillow-SIMD/#available-modules","title":"Available modules","text":"

The overview below shows which Pillow-SIMD installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pillow-SIMD, load one of these modules using a module load command like:

module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pillow/","title":"Pillow","text":"

Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.

https://pillow.readthedocs.org/

"},{"location":"available_software/detail/Pillow/#available-modules","title":"Available modules","text":"

The overview below shows which Pillow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pillow, load one of these modules using a module load command like:

module load Pillow/10.2.0-GCCcore-13.2.0\n
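Once a Pillow module is loaded, a minimal sketch like the one below can confirm the package imports and works. The image created in memory is purely illustrative (no input files are needed):

```python
from PIL import Image

# Create a small in-memory RGB image, then exercise resize and rotate
img = Image.new("RGB", (64, 64), color=(200, 30, 30))
thumb = img.resize((16, 16))
rotated = thumb.rotate(90)  # rotates in place of the same canvas size by default
print(rotated.size, rotated.mode)
```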

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow/10.2.0-GCCcore-13.2.0 x x x x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Pint/","title":"Pint","text":"

Pint is a Python package to define, operate and manipulate physical quantities: the product of a numerical value and a unit of measurement. It allows arithmetic operations between them and conversions from and to different units.

https://github.com/hgrecco/pint

"},{"location":"available_software/detail/Pint/#available-modules","title":"Available modules","text":"

The overview below shows which Pint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pint, load one of these modules using a module load command like:

module load Pint/0.24-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pint/0.24-GCCcore-13.2.0 x x x x x x x x x Pint/0.23-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pint/#pint024-gcccore-1320","title":"Pint/0.24-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

appdirs-1.4.4, flexcache-0.3, flexparser-0.3.1, Pint-0.24

"},{"location":"available_software/detail/PostgreSQL/","title":"PostgreSQL","text":"

PostgreSQL is a powerful, open source object-relational database system. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

https://www.postgresql.org/

"},{"location":"available_software/detail/PostgreSQL/#available-modules","title":"Available modules","text":"

The overview below shows which PostgreSQL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PostgreSQL, load one of these modules using a module load command like:

module load PostgreSQL/16.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PostgreSQL/16.1-GCCcore-13.2.0 x x x x x x x x x PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PuLP/","title":"PuLP","text":"

PuLP is an LP modeler written in Python. PuLP can generate MPS or LP files and call GLPK, COIN-OR CLP/CBC, CPLEX, GUROBI, MOSEK, XPRESS, CHOCO, MIPCL, SCIP to solve linear problems.

https://github.com/coin-or/pulp

"},{"location":"available_software/detail/PuLP/#available-modules","title":"Available modules","text":"

The overview below shows which PuLP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PuLP, load one of these modules using a module load command like:

module load PuLP/2.8.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PuLP/2.8.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PyOpenGL/","title":"PyOpenGL","text":"

PyOpenGL is the most common cross platform Python binding to OpenGL and related APIs.

http://pyopengl.sourceforge.net

"},{"location":"available_software/detail/PyOpenGL/#available-modules","title":"Available modules","text":"

The overview below shows which PyOpenGL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyOpenGL, load one of these modules using a module load command like:

module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyOpenGL/#pyopengl317-gcccore-1230","title":"PyOpenGL/3.1.7-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

PyOpenGL-3.1.7, PyOpenGL-accelerate-3.1.7

"},{"location":"available_software/detail/PyQt-builder/","title":"PyQt-builder","text":"

PyQt-builder is the PEP 517 compliant build system for PyQt and projects that extend PyQt. It extends the SIP build system and uses Qt's qmake to perform the actual compilation and installation of extension modules.

http://www.example.com

"},{"location":"available_software/detail/PyQt-builder/#available-modules","title":"Available modules","text":"

The overview below shows which PyQt-builder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyQt-builder, load one of these modules using a module load command like:

module load PyQt-builder/1.15.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt-builder/1.15.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/#pyqt-builder1154-gcccore-1230","title":"PyQt-builder/1.15.4-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

PyQt-builder-1.15.4

"},{"location":"available_software/detail/PyQt5/","title":"PyQt5","text":"

PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company's Qt WebEngine framework.

https://www.riverbankcomputing.com/software/pyqt

"},{"location":"available_software/detail/PyQt5/#available-modules","title":"Available modules","text":"

The overview below shows which PyQt5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyQt5, load one of these modules using a module load command like:

module load PyQt5/5.15.10-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt5/5.15.10-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyTorch/","title":"PyTorch","text":"

Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.

https://pytorch.org/

"},{"location":"available_software/detail/PyTorch/#available-modules","title":"Available modules","text":"

The overview below shows which PyTorch installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyTorch, load one of these modules using a module load command like:

module load PyTorch/2.1.2-foss-2023a\n
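With the module loaded, a small CPU-only sketch can verify the installation; the tensor values below are arbitrary examples:

```python
import torch

# Build a 2x3 tensor and take the product with its transpose
x = torch.arange(6, dtype=torch.float32).reshape(2, 3)
y = x @ x.T
print(tuple(y.shape), torch.cuda.is_available())
```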

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyTorch/2.1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PyYAML/","title":"PyYAML","text":"

PyYAML is a YAML parser and emitter for the Python programming language.

https://github.com/yaml/pyyaml

"},{"location":"available_software/detail/PyYAML/#available-modules","title":"Available modules","text":"

The overview below shows which PyYAML installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyYAML, load one of these modules using a module load command like:

module load PyYAML/6.0.1-GCCcore-13.2.0\n
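After loading the module, round-tripping a small document is a quick way to check PyYAML; the YAML content below is an illustrative example:

```python
import yaml

# Parse a YAML string into Python objects, then serialize it back
doc = yaml.safe_load("name: EESSI\ntargets: [aarch64, x86_64]")
out = yaml.safe_dump(doc)
print(doc["targets"])
```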

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyYAML/6.0.1-GCCcore-13.2.0 x x x x x x x x x PyYAML/6.0-GCCcore-12.3.0 x x x x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PyZMQ/","title":"PyZMQ","text":"

Python bindings for ZeroMQ

https://www.zeromq.org/bindings:python

"},{"location":"available_software/detail/PyZMQ/#available-modules","title":"Available modules","text":"

The overview below shows which PyZMQ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyZMQ, load one of these modules using a module load command like:

module load PyZMQ/25.1.1-GCCcore-12.3.0\n
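Once loaded, a self-contained in-process REQ/REP exchange is a simple way to exercise PyZMQ; the `inproc://demo` endpoint name is an arbitrary choice for this sketch:

```python
import zmq

# REQ/REP pair over an in-process transport: no network needed
ctx = zmq.Context.instance()
rep = ctx.socket(zmq.REP)
rep.bind("inproc://demo")
req = ctx.socket(zmq.REQ)
req.connect("inproc://demo")

req.send(b"ping")
assert rep.recv() == b"ping"
rep.send(b"pong")
reply = req.recv()
print(reply)

req.close()
rep.close()
```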

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pygments/","title":"Pygments","text":"

Generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code.

https://pygments.org/

"},{"location":"available_software/detail/Pygments/#available-modules","title":"Available modules","text":"

The overview below shows which Pygments installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pygments, load one of these modules using a module load command like:

module load Pygments/2.18.0-GCCcore-12.3.0\n
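With the module loaded, the sketch below highlights a one-line Python snippet to HTML; the input string is an arbitrary example:

```python
from pygments import highlight
from pygments.lexers import PythonLexer
from pygments.formatters import HtmlFormatter

# Render a code snippet as HTML with the default styling hooks
html = highlight('print("hello")', PythonLexer(), HtmlFormatter())
print(html.splitlines()[0])
```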

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pygments/2.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pysam/","title":"Pysam","text":"

Pysam is a Python module for reading and manipulating SAM files. It's a lightweight wrapper of the samtools C-API. Pysam also includes an interface for tabix.

https://github.com/pysam-developers/pysam

"},{"location":"available_software/detail/Pysam/#available-modules","title":"Available modules","text":"

The overview below shows which Pysam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pysam, load one of these modules using a module load command like:

module load Pysam/0.22.0-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pysam/0.22.0-GCC-12.3.0 x x x x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Python-bundle-PyPI/","title":"Python-bundle-PyPI","text":"

Bundle of Python packages from PyPI

https://python.org/

"},{"location":"available_software/detail/Python-bundle-PyPI/#available-modules","title":"Available modules","text":"

The overview below shows which Python-bundle-PyPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Python-bundle-PyPI, load one of these modules using a module load command like:

module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202310-gcccore-1320","title":"Python-bundle-PyPI/2023.10-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.13.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.6, bitarray-2.8.2, bitstring-4.1.2, blist-1.3.6, cachecontrol-0.13.1, cachy-0.3.0, certifi-2023.7.22, cffi-1.16.0, chardet-5.2.0, charset-normalizer-3.3.1, cleo-2.0.1, click-8.1.7, cloudpickle-3.0.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-3.0.4, decorator-5.1.1, distlib-0.3.7, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.6, ecdsa-0.18.0, editables-0.5, exceptiongroup-1.1.3, execnet-2.0.2, filelock-3.13.0, fsspec-2023.10.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.8.0, importlib_resources-6.1.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.3.0, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.3.2, jsonschema-4.17.3, keyring-24.2.0, keyrings.alt-5.0.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.1.0, more-itertools-10.1.0, msgpack-1.0.7, netaddr-0.9.0, netifaces-0.11.0, packaging-23.2, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.2, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, pluggy-1.3.0, pooch-1.8.0, psutil-5.9.6, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.19.0, pydevtool-0.3.0, Pygments-2.16.1, Pygments-2.16.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.1, pyrsistent-0.20.0, pytest-7.4.3, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3.post1, rapidfuzz-2.15.2, regex-2023.10.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.6.0, rich-click-1.7.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.4, simplegeneric-0.8.1, simplejson-3.19.2, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, sphinx-7.2.6, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-jsmath-1.0.1, 
sphinxcontrib_applehelp-1.0.7, sphinxcontrib_devhelp-1.0.5, sphinxcontrib_htmlhelp-2.0.4, sphinxcontrib_qthelp-1.0.6, sphinxcontrib_serializinghtml-1.1.9, sphinxcontrib_websupport-1.2.6, tabulate-0.9.0, threadpoolctl-3.2.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.12.1, ujson-5.8.0, urllib3-2.0.7, wcwidth-0.2.8, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.17.0

"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202306-gcccore-1230","title":"Python-bundle-PyPI/2023.06-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.12.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.5, bitstring-4.0.2, blist-1.3.6, CacheControl-0.12.14, cachy-0.3.0, certifi-2023.5.7, cffi-1.15.1, chardet-5.1.0, charset-normalizer-3.1.0, cleo-2.0.1, click-8.1.3, cloudpickle-2.2.1, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-0.29.35, decorator-5.1.1, distlib-0.3.6, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.5, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.1.1, execnet-1.9.0, filelock-3.12.2, fsspec-2023.6.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.7.0, importlib_resources-5.12.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.3, keyring-23.13.1, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.0.2, more-itertools-9.1.0, msgpack-1.0.5, netaddr-0.8.0, netifaces-0.11.0, packaging-23.1, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.1, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, pluggy-1.2.0, pooch-1.7.0, psutil-5.9.5, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.18.0, pydevtool-0.3.0, Pygments-2.15.1, Pygments-2.15.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.0, pyrsistent-0.19.3, pytest-7.4.0, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3, rapidfuzz-2.15.1, regex-2023.6.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.4.2, rich-click-1.6.1, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.0.post1, simplegeneric-0.8.1, simplejson-3.19.1, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-7.0.1, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.4, sphinxcontrib-devhelp-1.0.2, 
sphinxcontrib-htmlhelp-2.0.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.8, ujson-5.8.0, urllib3-1.26.16, wcwidth-0.2.6, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.15.0

"},{"location":"available_software/detail/Python/","title":"Python","text":"

Python is a programming language that lets you work more quickly and integrate your systems more effectively.

https://python.org/

"},{"location":"available_software/detail/Python/#available-modules","title":"Available modules","text":"

The overview below shows which Python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Python, load one of these modules using a module load command like:

module load Python/3.11.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python/3.11.5-GCCcore-13.2.0 x x x x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x - x x Python/3.10.8-GCCcore-12.2.0 x x x x x x - x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x - x x"},{"location":"available_software/detail/Python/#python3115-gcccore-1320","title":"Python/3.11.5-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

flit_core-3.9.0, packaging-23.2, pip-23.2.1, setuptools-68.2.2, setuptools-scm-8.0.4, tomli-2.0.1, typing_extensions-4.8.0, wheel-0.41.2

"},{"location":"available_software/detail/Python/#python3113-gcccore-1230","title":"Python/3.11.3-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

flit_core-3.9.0, packaging-23.1, pip-23.1.2, setuptools-67.7.2, setuptools_scm-7.1.0, tomli-2.0.1, typing_extensions-4.6.3, wheel-0.40.0

"},{"location":"available_software/detail/Python/#python3108-gcccore-1220","title":"Python/3.10.8-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.12, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-22.1.0, Babel-2.11.0, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.4, bcrypt-4.0.1, bitstring-3.1.9, blist-1.3.6, CacheControl-0.12.11, cachy-0.3.0, certifi-2022.9.24, cffi-1.15.1, chardet-5.0.0, charset-normalizer-2.1.1, cleo-1.0.0a5, click-8.1.3, clikit-0.6.2, cloudpickle-2.2.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.3.1, cryptography-38.0.3, Cython-0.29.32, decorator-5.1.1, distlib-0.3.6, docopt-0.6.2, docutils-0.19, doit-0.36.0, dulwich-0.20.50, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.0.1, execnet-1.9.0, filelock-3.8.0, flit-3.8.0, flit_core-3.8.0, flit_scm-1.7.0, fsspec-2022.11.0, future-0.18.2, glob2-0.7, hatch_fancy_pypi_readme-22.8.0, hatch_vcs-0.2.0, hatchling-1.11.1, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-5.0.0, importlib_resources-5.10.0, iniconfig-1.1.1, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.0, keyring-23.11.0, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, MarkupSafe-2.1.1, mock-4.0.3, more-itertools-9.0.0, msgpack-1.0.4, netaddr-0.8.0, netifaces-0.11.0, packaging-21.3, paramiko-2.12.0, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.10.1, pbr-5.11.0, pexpect-4.8.0, pip-22.3.1, pkginfo-1.8.3, platformdirs-2.5.3, pluggy-1.0.0, poetry-1.2.2, poetry-core-1.3.2, poetry_plugin_export-1.2.0, pooch-1.6.0, psutil-5.9.4, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.4.8, pycparser-2.21, pycryptodome-3.17, pydevtool-0.3.0, Pygments-2.13.0, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.0.9, pyrsistent-0.19.2, pytest-7.2.0, pytest-xdist-3.1.0, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2022.6, regex-2022.10.31, requests-2.28.1, requests-toolbelt-0.9.1, rich-13.1.0, rich-click-1.6.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, setuptools-63.4.3, setuptools-rust-1.5.2, setuptools_scm-7.0.5, 
shellingham-1.5.0, simplegeneric-0.8.1, simplejson-3.17.6, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-5.3.0, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.2, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.0, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.6, typing_extensions-4.4.0, ujson-5.5.0, urllib3-1.26.12, virtualenv-20.16.6, wcwidth-0.2.5, webencodings-0.5.1, wheel-0.38.4, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.10.0

"},{"location":"available_software/detail/Qhull/","title":"Qhull","text":"

Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull.

http://www.qhull.org

"},{"location":"available_software/detail/Qhull/#available-modules","title":"Available modules","text":"

The overview below shows which Qhull installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Qhull, load one of these modules using a module load command like:

module load Qhull/2020.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qhull/2020.2-GCCcore-13.2.0 x x x x x x x x x Qhull/2020.2-GCCcore-12.3.0 x x x x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Qt5/","title":"Qt5","text":"

Qt is a comprehensive cross-platform C++ application framework.

https://qt.io/

"},{"location":"available_software/detail/Qt5/#available-modules","title":"Available modules","text":"

The overview below shows which Qt5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Qt5, load one of these modules using a module load command like:

module load Qt5/5.15.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qt5/5.15.13-GCCcore-13.2.0 x x x x x x x x x Qt5/5.15.10-GCCcore-12.3.0 x x x x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/QuantumESPRESSO/","title":"QuantumESPRESSO","text":"

Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

https://www.quantum-espresso.org

"},{"location":"available_software/detail/QuantumESPRESSO/#available-modules","title":"Available modules","text":"

The overview below shows which QuantumESPRESSO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using QuantumESPRESSO, load one of these modules using a module load command like:

module load QuantumESPRESSO/7.3.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 QuantumESPRESSO/7.3.1-foss-2023a x x x x x x x x x QuantumESPRESSO/7.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/R-bundle-Bioconductor/","title":"R-bundle-Bioconductor","text":"

Bioconductor provides tools for the analysis and comprehension of high-throughput genomic data.

https://bioconductor.org

"},{"location":"available_software/detail/R-bundle-Bioconductor/#available-modules","title":"Available modules","text":"

The overview below shows which R-bundle-Bioconductor installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x - x x"},{"location":"available_software/detail/R-bundle-Bioconductor/#r-bundle-bioconductor318-foss-2023a-r-432","title":"R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2","text":"

This is a list of extensions included in the module:

affxparser-1.74.0, affy-1.80.0, affycoretools-1.74.0, affyio-1.72.0, AgiMicroRna-2.52.0, agricolae-1.3-7, ALDEx2-1.34.0, ALL-1.44.0, ANCOMBC-2.4.0, annaffy-1.74.0, annotate-1.80.0, AnnotationDbi-1.64.1, AnnotationFilter-1.26.0, AnnotationForge-1.44.0, AnnotationHub-3.10.0, anytime-0.3.9, aroma.affymetrix-3.2.1, aroma.apd-0.7.0, aroma.core-3.3.0, aroma.light-3.32.0, ash-1.0-15, ATACseqQC-1.26.0, AUCell-1.24.0, aws.s3-0.3.21, aws.signature-0.6.0, babelgene-22.9, ballgown-2.34.0, basilisk-1.14.2, basilisk.utils-1.14.1, batchelor-1.18.1, baySeq-2.36.0, beachmat-2.18.0, BH-1.84.0-0, Biobase-2.62.0, BiocBaseUtils-1.4.0, BiocFileCache-2.10.1, BiocGenerics-0.48.0, BiocIO-1.12.0, BiocManager-1.30.22, BiocNeighbors-1.20.2, BiocParallel-1.36.0, BiocSingular-1.18.0, BiocStyle-2.30.0, BiocVersion-3.18.1, biomaRt-2.58.0, biomformat-1.30.0, Biostrings-2.70.0, biovizBase-1.50.0, blme-1.0-5, bluster-1.12.0, bookdown-0.37, BSgenome-1.70.1, BSgenome.Cfamiliaris.UCSC.canFam3-1.4.0, BSgenome.Hsapiens.UCSC.hg19-1.4.3, BSgenome.Hsapiens.UCSC.hg38-1.4.5, BSgenome.Mmusculus.UCSC.mm10-1.4.3, bsseq-1.38.0, bumphunter-1.44.0, ca-0.71.1, CAGEfightR-1.22.0, CAGEr-2.8.0, CAMERA-1.58.0, Category-2.68.0, ccdata-1.28.0, ccmap-1.28.0, CGHbase-1.62.0, CGHcall-2.64.0, ChIPpeakAnno-3.36.0, chromVAR-1.24.0, clusterProfiler-4.10.0, CNEr-1.38.0, coloc-5.2.3, colorRamps-2.3.1, ComplexHeatmap-2.18.0, ConsensusClusterPlus-1.66.0, conumee-1.36.0, crossmeta-1.28.0, cummeRbund-2.44.0, cytolib-2.14.1, CytoML-2.14.0, dada2-1.30.0, ddPCRclust-1.22.0, DECIPHER-2.30.0, DeconRNASeq-1.44.0, decontam-1.22.0, decoupleR-2.8.0, DEGseq-1.56.1, DelayedArray-0.28.0, DelayedMatrixStats-1.24.0, densEstBayes-1.0-2.2, derfinder-1.36.0, derfinderHelper-1.36.0, DESeq2-1.42.0, diffcyt-1.22.0, dir.expiry-1.10.0, directlabels-2024.1.21, DirichletMultinomial-1.44.0, DNABarcodes-1.32.0, DNAcopy-1.76.0, DO.db-2.9, docopt-0.7.1, DOSE-3.28.2, dqrng-0.3.2, DRIMSeq-1.30.0, DropletUtils-1.22.0, DSS-2.50.1, dupRadar-1.32.0, DynDoc-1.80.0, 
EBImage-4.44.0, edgeR-4.0.12, egg-0.4.5, emmeans-1.10.0, enrichplot-1.22.0, EnsDb.Hsapiens.v75-2.99.0, EnsDb.Hsapiens.v79-2.99.0, EnsDb.Hsapiens.v86-2.99.0, ensembldb-2.26.0, escape-1.12.0, estimability-1.4.1, ExperimentHub-2.10.0, extraDistr-1.10.0, factoextra-1.0.7, fANCOVA-0.6-1, fda-6.1.4, FDb.InfiniumMethylation.hg19-2.2.0, fds-1.8, feature-1.2.15, fgsea-1.28.0, filelock-1.0.3, flowAI-1.32.0, flowClean-1.40.0, flowClust-3.40.0, flowCore-2.14.0, flowDensity-1.36.1, flowFP-1.60.0, flowMerge-2.50.0, flowPeaks-1.48.0, FlowSOM-2.10.0, FlowSorted.Blood.EPIC-2.6.0, FlowSorted.CordBloodCombined.450k-1.18.0, flowStats-4.14.1, flowViz-1.66.0, flowWorkspace-4.14.2, FRASER-1.14.0, fresh-0.2.0, gcrma-2.74.0, gdsfmt-1.38.0, genefilter-1.84.0, geneLenDataBase-1.38.0, geneplotter-1.80.0, GENESIS-2.32.0, GENIE3-1.24.0, GenomeInfoDb-1.38.5, GenomeInfoDbData-1.2.11, GenomicAlignments-1.38.2, GenomicFeatures-1.54.1, GenomicFiles-1.38.0, GenomicInteractions-1.36.0, GenomicRanges-1.54.1, GenomicScores-2.14.3, GEOmap-2.5-5, GEOquery-2.70.0, ggbio-1.50.0, ggcyto-1.30.0, ggdendro-0.1.23, ggnewscale-0.4.9, ggpointdensity-0.1.0, ggrastr-1.0.2, ggseqlogo-0.1, ggthemes-5.0.0, ggtree-3.10.0, GLAD-2.66.0, Glimma-2.12.0, GlobalAncova-4.20.0, globaltest-5.56.0, GO.db-3.18.0, GOSemSim-2.28.1, goseq-1.54.0, GOstats-2.68.0, graph-1.80.0, graphite-1.48.0, GSEABase-1.64.0, gsmoothr-0.1.7, gson-0.1.0, GSVA-1.50.0, Gviz-1.46.1, GWASExactHW-1.01, GWASTools-1.48.0, HDF5Array-1.30.0, HDO.db-0.99.1, hdrcde-3.4, heatmaply-1.5.0, hgu133plus2.db-3.13.0, HiCBricks-1.20.0, HiCcompare-1.24.0, HMMcopy-1.44.0, Homo.sapiens-1.3.1, IHW-1.30.0, IlluminaHumanMethylation450kanno.ilmn12.hg19-0.6.1, IlluminaHumanMethylation450kmanifest-0.4.0, IlluminaHumanMethylationEPICanno.ilm10b2.hg19-0.6.0, IlluminaHumanMethylationEPICanno.ilm10b4.hg19-0.6.0, IlluminaHumanMethylationEPICmanifest-0.3.0, illuminaio-0.44.0, impute-1.76.0, InteractionSet-1.30.0, interactiveDisplayBase-1.40.0, intervals-0.15.4, IRanges-2.36.0, 
isva-1.9, JASPAR2020-0.99.10, KEGGgraph-1.62.0, KEGGREST-1.42.0, LEA-3.14.0, limma-3.58.1, log4r-0.4.3, lpsymphony-1.30.0, lsa-0.73.3, lumi-2.54.0, M3Drop-1.28.0, marray-1.80.0, maSigPro-1.74.0, MassSpecWavelet-1.68.0, MatrixGenerics-1.14.0, MBA-0.1-0, MEDIPS-1.54.0, MetaboCoreUtils-1.10.0, metagenomeSeq-1.43.0, metaMA-3.1.3, metap-1.9, metapod-1.10.1, MethylSeekR-1.42.0, methylumi-2.48.0, Mfuzz-2.62.0, mia-1.10.0, minfi-1.48.0, missMethyl-1.36.0, mixOmics-6.26.0, mixsqp-0.3-54, MLInterfaces-1.82.0, MotifDb-1.44.0, motifmatchr-1.24.0, motifStack-1.46.0, MsCoreUtils-1.14.1, MsExperiment-1.4.0, MsFeatures-1.10.0, msigdbr-7.5.1, MSnbase-2.28.1, MSstats-4.10.0, MSstatsConvert-1.12.0, MSstatsLiP-1.8.1, MSstatsPTM-2.4.2, MSstatsTMT-2.10.0, MultiAssayExperiment-1.28.0, MultiDataSet-1.30.0, multtest-2.58.0, muscat-1.16.0, mutoss-0.1-13, mzID-1.40.0, mzR-2.36.0, NADA-1.6-1.1, ncdfFlow-2.48.0, NMF-0.26, NOISeq-2.46.0, numbat-1.3.2-1, oligo-1.66.0, oligoClasses-1.64.0, ontologyIndex-2.11, oompaBase-3.2.9, oompaData-3.1.3, openCyto-2.14.0, org.Hs.eg.db-3.18.0, org.Mm.eg.db-3.18.0, org.Rn.eg.db-3.18.0, OrganismDbi-1.44.0, OUTRIDER-1.20.0, pathview-1.42.0, pcaMethods-1.94.0, perm-1.0-0.4, PFAM.db-3.18.0, phyloseq-1.46.0, plyranges-1.22.0, pmp-1.14.0, polyester-1.38.0, poweRlaw-0.70.6, preprocessCore-1.64.0, pRoloc-1.42.0, pRolocdata-1.40.0, pRolocGUI-2.12.0, ProtGenerics-1.34.0, PRROC-1.3.1, PSCBS-0.66.0, PureCN-2.8.1, qap-0.1-2, QDNAseq-1.38.0, QFeatures-1.12.0, qlcMatrix-0.9.7, qqconf-1.3.2, quantsmooth-1.68.0, qvalue-2.34.0, R.devices-2.17.1, R.filesets-2.15.0, R.huge-0.10.1, rainbow-3.8, randomcoloR-1.1.0.1, rARPACK-0.11-0, RBGL-1.78.0, RcisTarget-1.22.0, RcppAnnoy-0.0.22, RcppHNSW-0.5.0, RcppML-0.3.7, RcppZiggurat-0.1.6, reactome.db-1.86.2, ReactomePA-1.46.0, regioneR-1.34.0, reldist-1.7-2, remaCor-0.0.16, Repitools-1.48.0, ReportingTools-2.42.3, ResidualMatrix-1.12.0, restfulr-0.0.15, Rfast-2.1.0, RFOC-3.4-10, rGADEM-2.50.0, Rgraphviz-2.46.0, rhdf5-2.46.1, 
rhdf5filters-1.14.1, Rhdf5lib-1.24.1, Rhtslib-2.4.1, Ringo-1.66.0, RNASeqPower-1.42.0, RnBeads-2.20.0, RnBeads.hg19-1.34.0, RnBeads.hg38-1.34.0, RnBeads.mm10-2.10.0, RnBeads.mm9-1.34.0, RnBeads.rn5-1.34.0, ROC-1.78.0, rols-2.30.0, ROntoTools-2.30.0, ropls-1.34.0, RPMG-2.2-7, RProtoBufLib-2.14.0, Rsamtools-2.18.0, RSEIS-4.1-6, Rsubread-2.16.1, rsvd-1.0.5, rtracklayer-1.62.0, Rwave-2.6-5, S4Arrays-1.2.0, S4Vectors-0.40.2, samr-3.0, SamSPECTRAL-1.56.0, SC3-1.30.0, ScaledMatrix-1.10.0, SCANVIS-1.16.0, scater-1.30.1, scattermore-1.2, scDblFinder-1.16.0, scistreer-1.2.0, scran-1.30.2, scrime-1.3.5, scuttle-1.12.0, SeqArray-1.42.0, seqLogo-1.68.0, SeqVarTools-1.40.0, seriation-1.5.4, Seurat-5.0.1, SeuratObject-5.0.1, shinyBS-0.61.1, shinydashboardPlus-2.0.3, shinyFiles-0.9.3, shinyhelper-0.3.2, shinypanel-0.1.5, shinyWidgets-0.8.1, ShortRead-1.60.0, siggenes-1.76.0, Signac-1.12.0, simplifyEnrichment-1.12.0, SingleCellExperiment-1.24.0, SingleR-2.4.1, sitmo-2.0.2, slingshot-2.10.0, SMVar-1.3.4, SNPRelate-1.36.0, snpStats-1.52.0, SparseArray-1.2.3, sparseMatrixStats-1.14.0, sparsesvd-0.2-2, SpatialExperiment-1.12.0, Spectra-1.12.0, SPIA-2.54.0, splancs-2.01-44, SPOTlight-1.6.7, stageR-1.24.0, struct-1.14.0, structToolbox-1.14.0, SummarizedExperiment-1.32.0, susieR-0.12.35, sva-3.50.0, TailRank-3.2.2, TFBSTools-1.40.0, TFMPvalue-0.0.9, tkWidgets-1.80.0, TrajectoryUtils-1.10.0, treeio-1.26.0, TreeSummarizedExperiment-2.10.0, TSP-1.2-4, TxDb.Hsapiens.UCSC.hg19.knownGene-3.2.2, TxDb.Mmusculus.UCSC.mm10.knownGene-3.10.0, tximport-1.30.0, UCell-2.6.2, uwot-0.1.16, variancePartition-1.32.2, VariantAnnotation-1.48.1, venn-1.12, vsn-3.70.0, waiter-0.2.5, wateRmelon-2.8.0, WGCNA-1.72-5, widgetTools-1.80.0, Wrench-1.20.0, xcms-4.0.2, XVector-0.42.0, zCompositions-1.5.0-1, zellkonverter-1.12.1, zlibbioc-1.48.0

"},{"location":"available_software/detail/R-bundle-Bioconductor/#r-bundle-bioconductor316-foss-2022b-r-422","title":"R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2","text":"

This is a list of extensions included in the module:

affxparser-1.70.0, affy-1.76.0, affycoretools-1.70.0, affyio-1.68.0, AgiMicroRna-2.48.0, agricolae-1.3-5, ALDEx2-1.30.0, ALL-1.40.0, ANCOMBC-2.0.2, annaffy-1.70.0, annotate-1.76.0, AnnotationDbi-1.60.2, AnnotationFilter-1.22.0, AnnotationForge-1.40.1, AnnotationHub-3.6.0, anytime-0.3.9, aroma.affymetrix-3.2.1, aroma.apd-0.6.1, aroma.core-3.3.0, aroma.light-3.28.0, ash-1.0-15, ATACseqQC-1.22.0, AUCell-1.20.2, aws.s3-0.3.21, aws.signature-0.6.0, babelgene-22.9, ballgown-2.30.0, basilisk-1.10.2, basilisk.utils-1.10.0, batchelor-1.14.1, baySeq-2.31.0, beachmat-2.14.0, Biobase-2.58.0, BiocBaseUtils-1.0.0, BiocFileCache-2.6.1, BiocGenerics-0.44.0, BiocIO-1.8.0, BiocManager-1.30.20, BiocNeighbors-1.16.0, BiocParallel-1.32.5, BiocSingular-1.14.0, BiocStyle-2.26.0, BiocVersion-3.16.0, biomaRt-2.54.0, biomformat-1.26.0, Biostrings-2.66.0, biovizBase-1.46.0, blme-1.0-5, bluster-1.8.0, bookdown-0.33, BSgenome-1.66.3, BSgenome.Cfamiliaris.UCSC.canFam3-1.4.0, BSgenome.Hsapiens.UCSC.hg19-1.4.3, BSgenome.Hsapiens.UCSC.hg38-1.4.5, BSgenome.Mmusculus.UCSC.mm10-1.4.3, bsseq-1.34.0, bumphunter-1.40.0, ca-0.71.1, CAGEr-2.4.0, CAMERA-1.54.0, Category-2.64.0, ccdata-1.24.0, ccmap-1.24.0, CGHbase-1.58.0, CGHcall-2.60.0, ChIPpeakAnno-3.32.0, chromVAR-1.20.2, clusterProfiler-4.6.2, CNEr-1.34.0, coloc-5.1.0.1, colorRamps-2.3.1, ComplexHeatmap-2.14.0, ConsensusClusterPlus-1.62.0, conumee-1.32.0, crossmeta-1.24.0, cummeRbund-2.40.0, cytolib-2.10.1, CytoML-2.10.0, dada2-1.26.0, ddPCRclust-1.18.0, DECIPHER-2.26.0, DeconRNASeq-1.40.0, decontam-1.18.0, decoupleR-2.4.0, DEGseq-1.52.0, DelayedArray-0.24.0, DelayedMatrixStats-1.20.0, densEstBayes-1.0-2.1, derfinder-1.32.0, derfinderHelper-1.32.0, DESeq2-1.38.3, diffcyt-1.18.0, dir.expiry-1.6.0, DirichletMultinomial-1.40.0, DNABarcodes-1.28.0, DNAcopy-1.72.3, DO.db-2.9, docopt-0.7.1, DOSE-3.24.2, dqrng-0.3.0, DRIMSeq-1.26.0, DropletUtils-1.18.1, DSS-2.46.0, dupRadar-1.28.0, DynDoc-1.76.0, EBImage-4.40.0, edgeR-3.40.2, egg-0.4.5, emmeans-1.8.5, 
enrichplot-1.18.3, EnsDb.Hsapiens.v75-2.99.0, EnsDb.Hsapiens.v79-2.99.0, EnsDb.Hsapiens.v86-2.99.0, ensembldb-2.22.0, escape-1.8.0, estimability-1.4.1, ExperimentHub-2.6.0, extraDistr-1.9.1, factoextra-1.0.7, fda-6.0.5, FDb.InfiniumMethylation.hg19-2.2.0, fds-1.8, feature-1.2.15, fgsea-1.24.0, filelock-1.0.2, flowAI-1.28.0, flowClean-1.36.0, flowClust-3.36.0, flowCore-2.10.0, flowDensity-1.32.0, flowFP-1.56.3, flowMerge-2.46.0, flowPeaks-1.44.0, FlowSOM-2.6.0, FlowSorted.Blood.EPIC-2.2.0, FlowSorted.CordBloodCombined.450k-1.14.0, flowStats-4.10.0, flowViz-1.62.0, flowWorkspace-4.10.1, FRASER-1.10.2, fresh-0.2.0, gcrma-2.70.0, gdsfmt-1.34.0, genefilter-1.80.3, geneLenDataBase-1.34.0, geneplotter-1.76.0, GENESIS-2.28.0, GENIE3-1.20.0, GenomeInfoDb-1.34.9, GenomeInfoDbData-1.2.9, GenomicAlignments-1.34.1, GenomicFeatures-1.50.4, GenomicFiles-1.34.0, GenomicRanges-1.50.2, GenomicScores-2.10.0, GEOmap-2.5-0, GEOquery-2.66.0, ggbio-1.46.0, ggcyto-1.26.4, ggdendro-0.1.23, ggnewscale-0.4.8, ggpointdensity-0.1.0, ggrastr-1.0.1, ggseqlogo-0.1, ggthemes-4.2.4, ggtree-3.6.2, GLAD-2.62.0, Glimma-2.8.0, GlobalAncova-4.16.0, globaltest-5.52.0, GO.db-3.16.0, GOSemSim-2.24.0, goseq-1.50.0, GOstats-2.64.0, graph-1.76.0, graphite-1.44.0, GSEABase-1.60.0, gsmoothr-0.1.7, gson-0.1.0, GSVA-1.46.0, Gviz-1.42.1, GWASExactHW-1.01, GWASTools-1.44.0, HDF5Array-1.26.0, HDO.db-0.99.1, hdrcde-3.4, heatmaply-1.4.2, hgu133plus2.db-3.13.0, HiCBricks-1.16.0, HiCcompare-1.20.0, HMMcopy-1.40.0, Homo.sapiens-1.3.1, IHW-1.26.0, IlluminaHumanMethylation450kanno.ilmn12.hg19-0.6.1, IlluminaHumanMethylation450kmanifest-0.4.0, IlluminaHumanMethylationEPICanno.ilm10b2.hg19-0.6.0, IlluminaHumanMethylationEPICanno.ilm10b4.hg19-0.6.0, IlluminaHumanMethylationEPICmanifest-0.3.0, illuminaio-0.40.0, impute-1.72.3, InteractionSet-1.26.1, interactiveDisplayBase-1.36.0, intervals-0.15.4, IRanges-2.32.0, isva-1.9, JASPAR2020-0.99.10, KEGGgraph-1.58.3, KEGGREST-1.38.0, LEA-3.10.2, limma-3.54.2, log4r-0.4.3, 
lpsymphony-1.26.3, lsa-0.73.3, lumi-2.50.0, M3Drop-1.24.0, marray-1.76.0, maSigPro-1.70.0, MassSpecWavelet-1.64.1, MatrixGenerics-1.10.0, MBA-0.1-0, MEDIPS-1.50.0, metagenomeSeq-1.40.0, metaMA-3.1.3, metap-1.8, metapod-1.6.0, MethylSeekR-1.38.0, methylumi-2.44.0, Mfuzz-2.58.0, mia-1.6.0, minfi-1.44.0, missMethyl-1.32.0, mixOmics-6.22.0, mixsqp-0.3-48, MLInterfaces-1.78.0, MotifDb-1.40.0, motifmatchr-1.20.0, motifStack-1.42.0, MsCoreUtils-1.10.0, MsFeatures-1.6.0, msigdbr-7.5.1, MSnbase-2.24.2, MSstats-4.6.5, MSstatsConvert-1.8.3, MSstatsLiP-1.4.1, MSstatsPTM-2.0.3, MSstatsTMT-2.6.1, MultiAssayExperiment-1.24.0, MultiDataSet-1.26.0, multtest-2.54.0, muscat-1.12.1, mutoss-0.1-13, mzID-1.36.0, mzR-2.32.0, NADA-1.6-1.1, ncdfFlow-2.44.0, NMF-0.25, NOISeq-2.42.0, numbat-1.2.2, oligo-1.62.2, oligoClasses-1.60.0, ontologyIndex-2.10, oompaBase-3.2.9, oompaData-3.1.3, openCyto-2.10.1, org.Hs.eg.db-3.16.0, org.Mm.eg.db-3.16.0, org.Rn.eg.db-3.16.0, OrganismDbi-1.40.0, OUTRIDER-1.16.3, pathview-1.38.0, pcaMethods-1.90.0, perm-1.0-0.2, PFAM.db-3.16.0, phyloseq-1.42.0, pmp-1.10.0, polyester-1.34.0, poweRlaw-0.70.6, preprocessCore-1.60.2, pRoloc-1.38.2, pRolocdata-1.36.0, pRolocGUI-2.8.0, ProtGenerics-1.30.0, PRROC-1.3.1, PSCBS-0.66.0, PureCN-2.4.0, qap-0.1-2, QDNAseq-1.34.0, qlcMatrix-0.9.7, qqconf-1.3.1, quantsmooth-1.64.0, qvalue-2.30.0, R.devices-2.17.1, R.filesets-2.15.0, R.huge-0.9.0, rainbow-3.7, randomcoloR-1.1.0.1, rARPACK-0.11-0, RBGL-1.74.0, RcisTarget-1.18.2, RcppAnnoy-0.0.20, RcppHNSW-0.4.1, RcppML-0.3.7, RcppZiggurat-0.1.6, reactome.db-1.82.0, ReactomePA-1.42.0, regioneR-1.30.0, reldist-1.7-2, remaCor-0.0.11, Repitools-1.44.0, ReportingTools-2.38.0, ResidualMatrix-1.8.0, restfulr-0.0.15, Rfast-2.0.7, RFOC-3.4-6, rGADEM-2.46.0, Rgraphviz-2.42.0, rhdf5-2.42.0, rhdf5filters-1.10.0, Rhdf5lib-1.20.0, Rhtslib-2.0.0, Ringo-1.62.0, RNASeqPower-1.38.0, RnBeads-2.16.0, RnBeads.hg19-1.30.0, RnBeads.hg38-1.30.0, RnBeads.mm10-2.6.0, RnBeads.mm9-1.30.0, RnBeads.rn5-1.30.0, 
ROC-1.74.0, rols-2.26.0, ROntoTools-2.26.0, ropls-1.30.0, RPMG-2.2-3, RProtoBufLib-2.10.0, Rsamtools-2.14.0, RSEIS-4.1-4, Rsubread-2.12.3, rsvd-1.0.5, rtracklayer-1.58.0, Rwave-2.6-5, S4Vectors-0.36.2, samr-3.0, SamSPECTRAL-1.52.0, SC3-1.26.2, ScaledMatrix-1.6.0, SCANVIS-1.12.0, scater-1.26.1, scattermore-0.8, scDblFinder-1.12.0, scistreer-1.1.0, scran-1.26.2, scrime-1.3.5, scuttle-1.8.4, SeqArray-1.38.0, seqLogo-1.64.0, SeqVarTools-1.36.0, seriation-1.4.2, Seurat-4.3.0, SeuratObject-4.1.3, shinyBS-0.61.1, shinydashboardPlus-2.0.3, shinyFiles-0.9.3, shinyhelper-0.3.2, shinypanel-0.1.5, shinyWidgets-0.7.6, ShortRead-1.56.1, siggenes-1.72.0, Signac-1.9.0, simplifyEnrichment-1.8.0, SingleCellExperiment-1.20.0, SingleR-2.0.0, sitmo-2.0.2, slingshot-2.6.0, SMVar-1.3.4, SNPRelate-1.32.2, snpStats-1.48.0, sparseMatrixStats-1.10.0, sparsesvd-0.2-2, SpatialExperiment-1.8.1, SPIA-2.50.0, splancs-2.01-43, SPOTlight-1.2.0, stageR-1.20.0, struct-1.10.0, structToolbox-1.10.1, SummarizedExperiment-1.28.0, susieR-0.12.35, sva-3.46.0, TailRank-3.2.2, TFBSTools-1.36.0, TFMPvalue-0.0.9, tkWidgets-1.76.0, TrajectoryUtils-1.6.0, treeio-1.22.0, TreeSummarizedExperiment-2.6.0, TSP-1.2-3, TxDb.Hsapiens.UCSC.hg19.knownGene-3.2.2, TxDb.Mmusculus.UCSC.mm10.knownGene-3.10.0, tximport-1.26.1, UCell-2.2.0, uwot-0.1.14, variancePartition-1.28.7, VariantAnnotation-1.44.1, venn-1.11, vsn-3.66.0, waiter-0.2.5, wateRmelon-2.4.0, WGCNA-1.72-1, widgetTools-1.76.0, Wrench-1.16.0, xcms-3.20.0, XVector-0.38.0, zCompositions-1.4.0-1, zellkonverter-1.8.0, zlibbioc-1.44.0

"},{"location":"available_software/detail/R-bundle-CRAN/","title":"R-bundle-CRAN","text":"

Bundle of R packages from CRAN

https://www.r-project.org/

"},{"location":"available_software/detail/R-bundle-CRAN/#available-modules","title":"Available modules","text":"

The overview below shows which R-bundle-CRAN installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using R-bundle-CRAN, load one of these modules using a module load command like:

module load R-bundle-CRAN/2024.06-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 R-bundle-CRAN/2024.06-foss-2023b x x x x x x x x x R-bundle-CRAN/2023.12-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/R-bundle-CRAN/#r-bundle-cran202406-foss-2023b","title":"R-bundle-CRAN/2024.06-foss-2023b","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.1, abe-3.0.1, abind-1.4-5, acepack-1.4.2, adabag-5.0, ade4-1.7-22, ADGofTest-0.3, admisc-0.35, aggregation-1.0.1, AICcmodavg-2.3-3, akima-0.6-3.4, alabama-2023.1.0, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-2.2.1, animation-2.7, aod-1.3.3, apcluster-1.4.13, ape-5.8, aplot-0.2.3, argparse-2.2.3, aricode-1.0.3, arm-1.14-4, arrayhelpers-1.1-0, asnipe-1.1.17, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-4, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-11, aws-2.5-5, awsMethods-1.1-1, backports-1.5.0, bacr-1.0.1, bartMachine-1.3.4.1, bartMachineJARs-1.2.1, base64-2.0.1, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-6, BayesPen-1.0, bayesplot-1.11.1, bayestestR-0.14.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25.1, BCEE-1.3.2, BDgraph-2.72, bdsmatrix-1.3-7, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.5, betareg-3.1-4, BH-1.84.0-0, BiasedUrn-2.0.12, bibtex-0.5.1, BIEN-1.2.6, bigD-0.2.0, BIGL-1.9.1, bigmemory-4.6.4, bigmemory.sri-0.1.8, bindr-0.1.1, bindrcpp-0.2.3, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-5-2, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.5-5, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.9.4, bold-1.3.0, boot-1.3-30, bootstrap-2019.6, Boruta-8.0.0, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brms-2.21.0, Brobdingnag-1.2-9, broom-1.0.6, broom.helpers-1.15.0, broom.mixed-0.2.9.5, bst-0.3-24, Cairo-1.6-2, calibrate-1.7.7, car-3.1-2, carData-3.0-5, caret-6.0-94, catlearn-1.0, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.3.0, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.3.1, chemometrics-1.4.4, chk-0.9.1, chkptstanr-0.1.1, chron-2.3-61, 
circlize-0.4.16, circular-0.5-0, class-7.3-22, classInt-0.4-10, cld2-1.2.4, clisymbols-1.2.0, clock-0.7.0, clue-0.3-65, cluster-2.1.6, clusterGeneration-1.3.8, clusterRepro-0.9, clustree-0.5.1, clValid-0.7, cmna-1.0.5, cmprsk-2.2-12, cNORM-3.0.4, cobalt-4.5.5, cobs-1.3-8, coda-0.19-4.1, codetools-0.2-20, coin-1.4-3, collapse-2.0.14, colorspace-2.1-0, colourpicker-1.3.0, combinat-0.0-8, ComICS-1.0.4, ComplexUpset-1.3.3, compositions-2.0-8, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, ConsRank-2.1.4, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-3, corpcor-1.6.10, corrplot-0.92, covr-3.6.4, CovSel-1.2.1, covsim-1.1.0, cowplot-1.1.3, coxed-0.3.3, coxme-2.2-20, crfsuite-0.4.2, crosstalk-1.2.1, crul-1.4.2, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.1.0, cubelyr-1.0.2, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-13, d3Network-0.5.2.1, dagitty-0.3-4, data.table-1.15.4, data.tree-1.1.0, DataCombine-0.2.21, datawizard-0.12.2, date-1.2-42, dbarts-0.9-28, DBI-1.2.3, dbplyr-2.5.0, dbscan-1.1-12, dcurver-0.9.2, ddalpha-1.3.15, deal-1.2-42, debugme-1.2.0, deldir-2.0-4, dendextend-1.17.1, DEoptim-2.2-8, DEoptimR-1.1-3, DepthProc-2.1.5, Deriv-4.1.3, DescTools-0.99.54, deSolve-1.40, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.11, DiceKriging-1.6.0, dichromat-2.0-0.1, dimRed-0.2.6, diptest-0.77-1, DiscriMiner-0.1-29, dismo-1.3-14, distillery-1.2-1, distr-2.9.3, distrEx-2.9.2, distributional-0.4.0, DistributionUtils-0.6-1, diveRsity-1.9.90, dlm-1.1-6, DMCfun-3.5.4, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.1-1, downloader-0.4, dplyr-1.1.4, dr-3.0.10, dreamerr-1.4.0, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.33, dtangle-2.0.9, dtplyr-1.3.1, DTRreg-2.2, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, e1071-1.7-14, earth-5.3.3, EasyABC-1.5.2, ECOSolveR-0.5.5, ellipse-0.5.0, elliptic-1.4-0, emdbook-1.3.13, emmeans-1.10.2, emoa-0.5-2, emulator-1.2-24, 
energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, EnvStats-2.8.1, epitools-0.5-10.1, ergm-4.6.0, ergm.count-4.1.2, ergm.multi-0.2.1, estimability-1.5.1, EValue-4.1.3, evd-2.3-7, Exact-3.2, expm-0.999-9, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-4, FactoMineR-2.11, FactorCopula-0.9.3, fail-1.3, farver-2.1.2, fastcluster-1.2.6, fastDummies-1.7.3, fasterize-1.0.5, fastICA-1.2-4, fastmatch-1.1-4, fdrtool-1.2.17, feather-0.3.5, ff-4.0.12, fftw-1.0-8, fftwtools-0.9-11, fields-15.2, filehash-2.4-5, finalfit-1.0.7, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-11, fixest-0.12.1, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-2, flexmix-2.3-19, flextable-0.9.6, fma-2.5, FME-1.3.6.3, fmri-1.9.12, FNN-1.1.4, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.23.0, foreign-0.8-86, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-12, fpp-0.5, fracdiff-1.5-3, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.33.2, future.apply-1.11.2, gam-1.22-3, gamlss-5.4-22, gamlss.data-6.0-6, gamlss.dist-6.1-1, gamlss.tr-5.1-9, gamm4-0.2-6, gap-1.5-3, gap.datasets-0.0.6, gapfill-0.9.6-1, gargle-1.5.2, gaussquad-1.0-3, gbm-2.1.9, gbRd-0.4.12, gclus-1.3.2, gdalUtils-2.0.3.2, gdata-3.0.0, gdistance-1.6.4, gdtools-0.3.7, gee-4.13-27, geeM-0.10.1, geepack-1.3.11, geex-1.1.1, geiger-2.0.11, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.14, geojsonsf-2.0.3, geometries-0.2.4, geometry-0.4.7, getopt-1.20.4, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.2.1, ggbeeswarm-0.7.2, ggdag-0.2.12, ggdist-3.3.2, ggExtra-0.10.1, ggfan-0.1.3, ggforce-0.4.2, ggformula-0.12.0, ggfun-0.1.5, ggh4x-0.2.8, ggnetwork-0.5.13, ggplot2-3.5.1, ggplotify-0.1.2, ggpubr-0.6.0, ggraph-2.2.1, ggrepel-0.9.5, ggridges-0.5.6, ggsci-3.2.0, ggsignif-0.6.4, ggstance-0.3.7, ggstats-0.6.0, ggvenn-0.1.10, ggvis-0.4.9, GillespieSSA-0.6.2, git2r-0.33.0, GJRM-0.2-6.5, glasso-1.11, gld-2.6.6, gllvm-1.4.3, 
glmmML-1.1.6, glmmTMB-1.1.9, glmnet-4.1-8, GlobalOptions-0.1.2, globals-0.16.3, gmm-1.8, gmodels-2.19.1, gmp-0.7-4, gnumeric-0.7-10, goftest-1.2-3, gomms-1.0, googledrive-2.1.1, googlesheets4-1.1.1, gower-1.0.1, GPArotation-2024.3-1, gplots-3.1.3.1, graphlayouts-1.1.1, grf-2.3.2, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.3-1, grpreg-3.4.0, GSA-1.03.3, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.10.1, gtable-0.3.5, gtools-3.9.5, gtsummary-1.7.2, GUTS-1.2.5, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.44.0.3, hal9001-0.4.6, haldensify-0.2.3, hardhat-1.4.0, harmony-1.2.0, hash-2.2.6.3, haven-2.5.4, hdf5r-1.3.10, hdm-0.3.2, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.3, HGNChelper-0.8.14, HiddenMarkov-1.8-13, Hmisc-5.1-3, hms-1.1.3, Hmsc-3.0-13, htmlTable-2.4.2, httpcode-0.3.0, huge-1.3.5, hunspell-3.0.3, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.21, idr-1.3, ids-1.0.1, ie2misc-0.9.1, igraph-2.0.3, image.binarization-0.1.3, imager-1.0.2, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.5, infotheo-1.2.0.1, inline-0.3.19, insight-0.20.3, intergraph-2.0-4, interp-1.1-6, interpretR-0.2.5, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-21, isoband-0.2.7, ISOcodes-2024.02.12, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-4, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-6, jpeg-0.1-10, jsonify-1.2.2, jstable-1.2.6, juicyjuice-0.1.0, kde1d-1.0.7, kedd-1.0.4, kernlab-0.9-32, KernSmooth-2.23-24, kinship2-1.9.6.1, klaR-1.7-3, KODAMA-2.4, kohonen-3.0.12, ks-1.14.2, labdsv-2.1-0, labeling-0.4.3, labelled-2.13.0, laeken-0.5.3, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, lattice-0.22-6, latticeExtra-0.6-30, lava-1.8.0, lavaan-0.6-18, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.5.2, ldbounds-2.0.2, leafem-0.2.3, leaflet-2.2.2, leaflet.providers-2.0.0, leafsync-0.1.0, leaps-3.2, LearnBayes-2.15.1, leiden-0.4.3.1, 
lhs-1.1.6, libcoin-1.0-10, limSolve-1.5.7.1, linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.1, lme4-1.1-35.4, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-3.0, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.10, logcondens-2.1.8, logger-0.3.0, logistf-1.26.0, logspline-2.1.22, longitudinal-1.1.13, longmemo-1.1-2, loo-2.7.0, lpSolve-5.6.20, lpSolveAPI-5.5.2.0-17.11, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.3, lwgeom-0.2-14, magic-1.6-1, magick-2.8.3, MALDIquant-1.22.2, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.2, maptools-1.1-8, markdown-1.13, MASS-7.3-61, Matching-4.10-14, MatchIt-4.5.5, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.7-0, matrixcalc-1.0-6, MatrixModels-0.5-3, matrixStats-1.3.0, maxLik-1.5-2.1, maxlike-0.1-11, maxnet-0.1.4, mboost-2.9-10, mclogit-0.9.6, mclust-6.1.1, mcmc-0.9-8, MCMCpack-1.7-0, mcmcse-1.5-0, mda-0.5-4, medflex-0.6-10, mediation-4.5.0, memisc-0.99.31.7, memuse-4.2-3, MESS-0.5.12, metadat-1.2-0, metafor-4.6-0, MetaUtility-2.1.2, mets-1.3.4, mgcv-1.9-1, mgsub-1.7.3, mhsmm-0.4.21, mi-1.1, mice-3.16.0, miceadds-3.17-44, microbenchmark-1.4.10, MIIVsem-0.5.8, minerva-1.5.10, minpack.lm-1.2-4, minqa-1.2.7, minty-0.0.1, mirt-1.41, misc3d-0.9-1, miscTools-0.6-28, missForest-1.5, missMDA-1.19, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-5, mlegp-3.1.9, MLmetrics-1.1.3, mlogit-1.1-1, mlr-2.19.2, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, modelr-0.1.11, modeltools-0.2-23, momentfit-0.5, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.4.0, mpath-0.4-2.25, mRMRe-2.1.2.1, msm-1.7.1, mstate-0.3.2, multcomp-1.4-25, multcompView-0.1-10, multicool-1.0.1, multipol-1.0-9, multitaper-1.0-17, munsell-0.5.1, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.2-5, nabor-0.5.0, naniar-1.1.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.22, NCmisc-1.2.0, network-1.18.2, networkDynamic-0.11.4, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-14, ngspatial-1.2-2, 
NISTunits-1.0.1, nleqslv-3.3.5, nlme-3.1-165, nloptr-2.1.0, NLP-0.2-1, nlsem-0.8-1, nnet-7.3-19, nnls-1.5, nonnest2-0.5-7, nor1mix-1.3-3, norm-1.0-11.1, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.8-2, OceanView-1.0.7, oddsratio-2.0.1, officer-0.6.6, openair-2.18-2, OpenMx-2.21.11, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2023-10.21, optmatch-0.10.7, optparse-1.7.5, ordinal-2023.12-4, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.3.3, outliers-0.15, packrat-0.9.2, pacman-0.5.1, pammtools-0.5.93, pamr-1.56.2, pan-1.9, parallelDist-0.2.6, parallelly-1.37.1, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-15, partykit-1.2-20, pastecs-1.4.2, patchwork-1.2.0, pbapply-1.7-2, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-4, pdp-0.8.1, PearsonDS-1.3.1, pec-2023.04.12, penalized-0.9-52, penfa-0.1.1, peperr-1.5, performance-0.12.2, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.12, phytools-2.3-0, pim-2.0.2, pinfsc50-1.3.0, pixmap-0.4-13, pkgmaker-0.32.10, PKI-0.1-14, plogr-0.2.0, plot3D-1.4.1, plot3Drgl-1.0.4, plotly-4.10.4, plotmo-3.6.3, plotrix-3.8-4, pls-2.8-3, plyr-1.8.9, PMA-1.2-3, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.25, Polychrome-1.5.1, polyclip-1.10-6, polycor-0.8-1, polynom-1.4-1, posterior-1.5.0, ppcor-1.1, prabclus-2.3-3, pracma-2.4.4, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, princurve-2.1.6, pROC-1.18.5, prodlim-2023.08.28, profileModel-0.6.1, proftools-0.99-3, progress-1.2.3, progressr-0.14.0, projpred-2.8.0, proto-1.0.0, proxy-0.4-27, proxyC-0.4.1, pryr-0.1.6, pscl-1.5.9, pspline-1.0-20, psych-2.4.3, Publish-2023.01.17, pulsar-0.3.11, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.8, qqman-0.1.9, qrnn-2.1.1, quadprog-1.5-8, quanteda-4.0.2, quantmod-0.4.26, quantreg-5.98, questionr-0.7.8, QuickJSR-1.2.2, R.cache-0.16.0, R.matlab-3.7.0, 
R.methodsS3-1.8.2, R.oo-1.26.0, R.rsp-0.46.0, R.utils-2.12.3, R2WinBUGS-2.1-22.1, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.3, randtoolbox-2.0.4, rangeModelMetadata-0.1.5, ranger-0.16.0, RANN-2.6.1, rapidjsonr-1.2.0, rARPACK-0.11-0, raster-3.6-26, rasterVis-0.51.6, ratelimitr-0.4.1, RBesT-1.7-3, rbibutils-2.2.16, rbison-1.0.0, Rborist-0.3-7, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, RColorBrewer-1.1-3, RcppArmadillo-0.12.8.4.0, RcppEigen-0.3.4.0.0, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.7, RcppTOML-0.2.2, RCurl-1.98-1.14, rda-1.2-1, Rdpack-2.6, rdrop2-0.8.2.1, reactable-0.4.4, reactR-0.5.0, readbitmap-0.1.5, reader-1.0.6, readODS-2.3.0, readr-2.1.5, readxl-1.4.3, rebird-1.3.0, recipes-1.0.10, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.5, relsurv-2.2-9, rematch-2.0.0, rentrez-1.2.3, renv-1.0.7, reprex-2.1.0, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.38.0, rex-1.2.1, rgbif-3.8.0, RGCCA-3.0.3, rgdal-1.6-7, rgeos-0.6-4, rgexf-0.16.2, rgl-1.3.1, Rglpk-0.6-5.1, rhandsontable-0.3.8, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.8, RInside-0.2.18, rio-1.1.1, riskRegression-2023.12.21, ritis-1.0.0, RItools-0.3-4, rJava-1.0-11, rjson-0.2.21, RJSONIO-1.3-1.9, rle-0.9.2, rlecuyer-0.3-8, rlemon-0.2.1, rlist-0.4.6.2, rmeta-3.0, Rmpfr-0.9-5, rms-6.8-1, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.7.0, robustbase-0.99-2, ROCR-1.0-11, ROI-1.0-1, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.4, roptim-0.1.6, rotl-3.1.0, rpact-4.0.0, rpart-4.1.23, rpf-1.0.14, RPMM-1.25, RPostgreSQL-0.7-6, rrcov-1.7-5, rredlist-0.7.1, rsample-1.2.1, rsconnect-1.3.1, Rserve-1.8-13, RSNNS-0.4-17, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.7, Rssa-1.0.5, rstan-2.32.6, rstantools-2.4.0, rstatix-0.7.2, rtdists-0.11-5, Rtsne-0.17, Rttf2pt1-1.3.12, RUnit-0.4.33, ruv-0.9.7.1, rvertnet-0.8.4, rvest-1.0.4, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, 
RWekajars-3.9.3-2, s2-1.1.6, sampling-2.10, sandwich-3.1-0, SBdecomp-1.2, scales-1.3.0, scam-1.2-17, scatterpie-0.2.3, scatterplot3d-0.3-44, scs-3.2.4, sctransform-0.4.1, SDMTools-1.1-221.2, seewave-2.2.3, segmented-2.1-0, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-36, servr-0.30, setRNG-2024.2-1, sf-1.0-16, sfheaders-0.4.4, sfsmisc-1.1-18, shadowtext-0.1.3, shape-1.4.6.1, shapefiles-0.7.2, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-1.8-0, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.1, sm-2.2-6.0, smoof-1.6.0.3, smoother-1.3, sn-2.1.1, sna-2.7-2, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.1, snowfall-1.84-6.3, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sp-2.1-4, spaa-0.2.2, spam-2.10-0, spaMM-4.5.0, SparseM-1.83, SPAtest-3.1.2, spatial-7.3-17, spatstat-3.0-8, spatstat.core-2.4-4, spatstat.data-3.1-2, spatstat.explore-3.2-7, spatstat.geom-3.2-9, spatstat.linnet-3.1-5, spatstat.model-3.2-11, spatstat.random-3.2-3, spatstat.sparse-3.1-0, spatstat.utils-3.0-5, spData-2.3.1, spdep-1.3-5, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.3, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.32.9, stargazer-5.2.3, stars-0.6-5, startupmsg-0.9.6.1, StatMatch-1.4.2, statmod-1.5.0, statnet-2019.6, statnet.common-4.9.0, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.12, stringmagic-1.1.2, strucchange-1.5-3, styler-1.10.3, subplex-1.8, SuperLearner-2.0-29, SuppDists-1.1-9.7, survey-4.4-2, survival-3.7-0, survivalROC-1.0.3.1, svd-0.5.5, svglite-2.1.3, svUnit-1.0.6, swagger-5.17.14, symmoments-1.2.1, tableone-0.13.2, tabletools-0.1.0, tau-0.0-25, taxize-0.9.100, tcltk2-1.2-11, tclust-2.0-4, TeachingDemos-2.13, tensor-1.5, tensorA-0.36.2.1, tergm-4.2.0, terra-1.7-78, testit-0.13, textcat-1.0-8, textplot-0.2.2, TFisher-0.2.0, TH.data-1.1-2, threejs-0.3.3, tictoc-1.2.1, 
tidybayes-3.0.6, tidygraph-1.3.1, tidyr-1.3.1, tidyselect-1.2.1, tidytext-0.4.2, tidytree-0.4.6, tidyverse-2.0.0, tiff-0.1-12, timechange-0.3.0, timeDate-4032.109, timereg-2.0.5, tkrplot-0.0-27, tm-0.7-13, tmap-3.3-4, tmaptools-3.1-1, TMB-1.9.12, tmle-2.0.1.1, tmvnsim-1.0-2, tmvtnorm-1.6, tokenizers-0.3.0, topicmodels-0.2-16, TraMineR-2.2-10, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-9, trust-0.1-8, tseries-0.10-56, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.4, tuneR-1.4.7, twang-2.6, tweedie-2.3.5, tweenr-2.0.3, tzdb-0.4.0, ucminf-1.2.1, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-1.0.0, uniqueAtomMat-0.1-3-2, units-0.8-5, unmarked-1.4.1, UpSetR-1.4.0, urca-1.3-4, urltools-1.7.3, uroot-2.1-3, uuid-1.2-0, V8-4.4.2, varhandle-2.0.6, vcd-1.4-12, vcfR-1.15.0, vegan-2.6-6.1, VennDiagram-1.7.3, VGAM-1.1-11, VIM-6.2.2, VineCopula-2.5.0, vioplot-0.4.0, vipor-0.4.7, viridis-0.6.5, viridisLite-0.4.2, visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.5, VSURF-1.2.0, warp-0.2.1, waveslim-1.8.5, wdm-0.2.4, webshot-0.5.5, webutils-1.2.0, weights-1.0.4, WeightSVM-1.7-13, wellknown-0.7.4, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.7.1, wikitaxa-0.4.0, wk-0.9.1, word2vec-0.4.0, wordcloud-2.6, worrms-0.4.3, writexl-1.5.0, WriteXLS-6.6.0, XBRL-0.99.19.1, xgboost-1.7.7.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.16.1, xts-0.14.0, yaImpute-1.0-34, yulab.utils-0.1.4, zeallot-0.1.0, zoo-1.8-12

"},{"location":"available_software/detail/R-bundle-CRAN/#r-bundle-cran202312-foss-2023a","title":"R-bundle-CRAN/2023.12-foss-2023a","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.0, abe-3.0.1, abind-1.4-5, acepack-1.4.2, adabag-5.0, ade4-1.7-22, ADGofTest-0.3, admisc-0.34, aggregation-1.0.1, AICcmodavg-2.3-3, akima-0.6-3.4, alabama-2023.1.0, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-1.0.1, animation-2.7, aod-1.3.2, apcluster-1.4.11, ape-5.7-1, aplot-0.2.2, argparse-2.2.2, aricode-1.0.3, arm-1.13-1, arrayhelpers-1.1-0, asnipe-1.1.17, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-4, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-11, aws-2.5-3, awsMethods-1.1-1, backports-1.4.1, bacr-1.0.1, bartMachine-1.3.4.1, bartMachineJARs-1.2.1, base64-2.0.1, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-6, BayesPen-1.0, bayesplot-1.10.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25.1, BCEE-1.3.2, BDgraph-2.72, bdsmatrix-1.3-6, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.0, betareg-3.1-4, BH-1.81.0-1, BiasedUrn-2.0.11, bibtex-0.5.1, BIEN-1.2.6, bigD-0.2.0, BIGL-1.8.0, bigmemory-4.6.1, bigmemory.sri-0.1.6, bindr-0.1.1, bindrcpp-0.2.2, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-4, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.5-2, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.9.1, bold-1.3.0, boot-1.3-28.1, bootstrap-2019.6, Boruta-8.0.0, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brms-2.20.4, Brobdingnag-1.2-9, broom-1.0.5, broom.helpers-1.14.0, broom.mixed-0.2.9.4, bst-0.3-24, Cairo-1.6-2, calibrate-1.7.7, car-3.1-2, carData-3.0-5, caret-6.0-94, catlearn-1.0, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.3.0, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.3.1, chemometrics-1.4.4, chk-0.9.1, chkptstanr-0.1.1, chron-2.3-61, circlize-0.4.15, 
circular-0.5-0, class-7.3-22, classInt-0.4-10, cld2-1.2.4, clisymbols-1.2.0, clock-0.7.0, clue-0.3-65, cluster-2.1.6, clusterGeneration-1.3.8, clusterRepro-0.9, clustree-0.5.1, clValid-0.7, cmprsk-2.2-11, cNORM-3.0.4, cobalt-4.5.2, cobs-1.3-5, coda-0.19-4, codetools-0.2-19, coin-1.4-3, collapse-2.0.7, colorspace-2.1-0, colourpicker-1.3.0, combinat-0.0-8, ComICS-1.0.4, ComplexUpset-1.3.3, compositions-2.0-6, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, ConsRank-2.1.3, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-3, corpcor-1.6.10, corrplot-0.92, covr-3.6.4, CovSel-1.2.1, covsim-1.0.0, cowplot-1.1.1, coxed-0.3.3, coxme-2.2-18.1, crfsuite-0.4.2, crosstalk-1.2.1, crul-1.4.0, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.1.0, cubelyr-1.0.2, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-11, d3Network-0.5.2.1, dagitty-0.3-4, data.table-1.14.10, data.tree-1.1.0, DataCombine-0.2.21, date-1.2-42, dbarts-0.9-25, DBI-1.1.3, dbplyr-2.4.0, dbscan-1.1-12, dcurver-0.9.2, ddalpha-1.3.13, deal-1.2-42, debugme-1.1.0, deldir-2.0-2, dendextend-1.17.1, DEoptim-2.2-8, DEoptimR-1.1-3, DepthProc-2.1.5, Deriv-4.1.3, DescTools-0.99.52, deSolve-1.40, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.10, DiceKriging-1.6.0, dichromat-2.0-0.1, dimRed-0.2.6, diptest-0.77-0, DiscriMiner-0.1-29, dismo-1.3-14, distillery-1.2-1, distr-2.9.2, distrEx-2.9.0, distributional-0.3.2, DistributionUtils-0.6-1, diveRsity-1.9.90, dlm-1.1-6, DMCfun-2.0.2, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.1-1, downloader-0.4, dplyr-1.1.4, dr-3.0.10, dreamerr-1.4.0, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.31, dtangle-2.0.9, dtplyr-1.3.1, DTRreg-2.0, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, e1071-1.7-14, earth-5.3.2, EasyABC-1.5.2, ECOSolveR-0.5.5, ellipse-0.5.0, elliptic-1.4-0, emdbook-1.3.13, emmeans-1.8.9, emoa-0.5-0.2, emulator-1.2-21, energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, 
EnvStats-2.8.1, epitools-0.5-10.1, ergm-4.5.0, ergm.count-4.1.1, ergm.multi-0.2.0, estimability-1.4.1, EValue-4.1.3, evd-2.3-6.1, Exact-3.2, expm-0.999-8, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-3, FactoMineR-2.9, FactorCopula-0.9.3, fail-1.3, farver-2.1.1, fastcluster-1.2.3, fastDummies-1.7.3, fasterize-1.0.5, fastICA-1.2-4, fastmatch-1.1-4, fdrtool-1.2.17, feather-0.3.5, ff-4.0.9, fftw-1.0-7, fftwtools-0.9-11, fields-15.2, filehash-2.4-5, finalfit-1.0.7, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-11, fixest-0.11.2, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-1, flexmix-2.3-19, flextable-0.9.4, fma-2.5, FME-1.3.6.3, fmri-1.9.12, FNN-1.1.3.2, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.21.1, foreign-0.8-86, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-10, fpp-0.5, fracdiff-1.5-2, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.33.0, future.apply-1.11.0, gam-1.22-3, gamlss-5.4-20, gamlss.data-6.0-2, gamlss.dist-6.1-1, gamlss.tr-5.1-7, gamm4-0.2-6, gap-1.5-3, gap.datasets-0.0.6, gapfill-0.9.6-1, gargle-1.5.2, gaussquad-1.0-3, gbm-2.1.8.1, gbRd-0.4-11, gclus-1.3.2, gdalUtils-2.0.3.2, gdata-3.0.0, gdistance-1.6.4, gdtools-0.3.5, gee-4.13-26, geeM-0.10.1, geepack-1.3.9, geex-1.1.1, geiger-2.0.11, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.10.1, geojsonsf-2.0.3, geometries-0.2.3, geometry-0.4.7, getopt-1.20.4, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.2.0, ggbeeswarm-0.7.2, ggdag-0.2.10, ggdist-3.3.1, ggExtra-0.10.1, ggfan-0.1.3, ggforce-0.4.1, ggformula-0.12.0, ggfun-0.1.3, ggh4x-0.2.6, ggnetwork-0.5.12, ggplot2-3.4.4, ggplotify-0.1.2, ggpubr-0.6.0, ggraph-2.1.0, ggrepel-0.9.4, ggridges-0.5.4, ggsci-3.0.0, ggsignif-0.6.4, ggstance-0.3.6, ggstats-0.5.1, ggvenn-0.1.10, ggvis-0.4.8, GillespieSSA-0.6.2, git2r-0.33.0, GJRM-0.2-6.4, glasso-1.11, gld-2.6.6, gllvm-1.4.3, glmmML-1.1.6, glmmTMB-1.1.8, 
glmnet-4.1-8, GlobalOptions-0.1.2, globals-0.16.2, gmm-1.8, gmodels-2.18.1.1, gmp-0.7-3, gnumeric-0.7-10, goftest-1.2-3, gomms-1.0, googledrive-2.1.1, googlesheets4-1.1.1, gower-1.0.1, GPArotation-2023.11-1, gplots-3.1.3, graphlayouts-1.0.2, grf-2.3.1, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.3-1, grpreg-3.4.0, GSA-1.03.2, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.10.0, gtable-0.3.4, gtools-3.9.5, gtsummary-1.7.2, GUTS-1.2.5, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.42.0.2, hal9001-0.4.6, haldensify-0.2.3, hardhat-1.3.0, harmony-1.2.0, hash-2.2.6.3, haven-2.5.4, hdf5r-1.3.8, hdm-0.3.1, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.3, HGNChelper-0.8.1, HiddenMarkov-1.8-13, Hmisc-5.1-1, hms-1.1.3, Hmsc-3.0-13, htmlTable-2.4.2, httpcode-0.3.0, huge-1.3.5, hunspell-3.0.3, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.20, idr-1.3, ids-1.0.1, ie2misc-0.9.1, igraph-1.5.1, image.binarization-0.1.3, imager-0.45.2, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.5, infotheo-1.2.0.1, inline-0.3.19, intergraph-2.0-3, interp-1.1-5, interpretR-0.2.5, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-21, isoband-0.2.7, ISOcodes-2023.12.07, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-4, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-6, jpeg-0.1-10, jsonify-1.2.2, jstable-1.1.3, juicyjuice-0.1.0, kde1d-1.0.5, kedd-1.0.3, kernlab-0.9-32, KernSmooth-2.23-22, kinship2-1.9.6, klaR-1.7-2, KODAMA-2.4, kohonen-3.0.12, ks-1.14.1, labdsv-2.1-0, labeling-0.4.3, labelled-2.12.0, laeken-0.5.2, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, lattice-0.22-5, latticeExtra-0.6-30, lava-1.7.3, lavaan-0.6-16, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.4.2, ldbounds-2.0.2, leafem-0.2.3, leaflet-2.2.1, leaflet.providers-2.0.0, leafsync-0.1.0, leaps-3.1, LearnBayes-2.15.1, leiden-0.4.3.1, lhs-1.1.6, libcoin-1.0-10, limSolve-1.5.7, 
linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.0, lme4-1.1-35.1, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-3.0, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.8, logcondens-2.1.8, logger-0.2.2, logistf-1.26.0, logspline-2.1.21, longitudinal-1.1.13, longmemo-1.1-2, loo-2.6.0, lpSolve-5.6.19, lpSolveAPI-5.5.2.0-17.11, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.3, lwgeom-0.2-13, magic-1.6-1, magick-2.8.1, MALDIquant-1.22.1, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.1.1, maptools-1.1-8, markdown-1.12, MASS-7.3-60, Matching-4.10-14, MatchIt-4.5.5, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.6-4, matrixcalc-1.0-6, MatrixModels-0.5-3, matrixStats-1.1.0, maxLik-1.5-2, maxlike-0.1-10, maxnet-0.1.4, mboost-2.9-9, mclogit-0.9.6, mclust-6.0.1, mcmc-0.9-8, MCMCpack-1.6-3, mcmcse-1.5-0, mda-0.5-4, medflex-0.6-10, mediation-4.5.0, memisc-0.99.31.6, memuse-4.2-3, MESS-0.5.12, metadat-1.2-0, metafor-4.4-0, MetaUtility-2.1.2, mets-1.3.3, mgcv-1.9-0, mgsub-1.7.3, mhsmm-0.4.21, mi-1.1, mice-3.16.0, miceadds-3.16-18, microbenchmark-1.4.10, MIIVsem-0.5.8, minerva-1.5.10, minpack.lm-1.2-4, minqa-1.2.6, mirt-1.41, misc3d-0.9-1, miscTools-0.6-28, missForest-1.5, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-3.1, mlegp-3.1.9, MLmetrics-1.1.1, mlogit-1.1-1, mlr-2.19.1, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, modelr-0.1.11, modeltools-0.2-23, momentfit-0.5, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.4.0, mpath-0.4-2.23, mRMRe-2.1.2.1, msm-1.7.1, mstate-0.3.2, multcomp-1.4-25, multcompView-0.1-9, multicool-1.0.0, multipol-1.0-9, munsell-0.5.0, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.2-4, nabor-0.5.0, naniar-1.0.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.22, NCmisc-1.2.0, network-1.18.2, networkDynamic-0.11.3, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-14, ngspatial-1.2-2, NISTunits-1.0.1, nleqslv-3.3.5, nlme-3.1-164, nloptr-2.0.3, NLP-0.2-1, nlsem-0.8-1, nnet-7.3-19, 
nnls-1.5, nonnest2-0.5-6, nor1mix-1.3-2, norm-1.0-11.1, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.8-2, OceanView-1.0.6, oddsratio-2.0.1, officer-0.6.3, openair-2.18-0, OpenMx-2.21.11, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2023-10.21, optmatch-0.10.7, optparse-1.7.3, ordinal-2023.12-4, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.3.2, outliers-0.15, packrat-0.9.2, pacman-0.5.1, pammtools-0.5.92, pamr-1.56.1, pan-1.9, parallelDist-0.2.6, parallelly-1.36.0, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-14, partykit-1.2-20, pastecs-1.3.21, patchwork-1.1.3, pbapply-1.7-2, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-4, pdp-0.8.1, PearsonDS-1.3.0, pec-2023.04.12, penalized-0.9-52, penfa-0.1.1, peperr-1.5, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.10, phytools-2.0-3, pim-2.0.2, pinfsc50-1.3.0, pixmap-0.4-12, pkgmaker-0.32.10, plogr-0.2.0, plot3D-1.4, plot3Drgl-1.0.4, plotly-4.10.3, plotmo-3.6.2, plotrix-3.8-4, pls-2.8-3, plyr-1.8.9, PMA-1.2-2, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.24, Polychrome-1.5.1, polyclip-1.10-6, polycor-0.8-1, polynom-1.4-1, posterior-1.5.0, ppcor-1.1, prabclus-2.3-3, pracma-2.4.4, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, princurve-2.1.6, pROC-1.18.5, prodlim-2023.08.28, profileModel-0.6.1, proftools-0.99-3, progress-1.2.3, progressr-0.14.0, projpred-2.7.0, proto-1.0.0, proxy-0.4-27, proxyC-0.3.4, pryr-0.1.6, pscl-1.5.5.1, pspline-1.0-19, psych-2.3.9, Publish-2023.01.17, pulsar-0.3.11, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.8, qqman-0.1.9, qrnn-2.1, quadprog-1.5-8, quanteda-3.3.1, quantmod-0.4.25, quantreg-5.97, questionr-0.7.8, QuickJSR-1.0.8, R.cache-0.16.0, R.matlab-3.7.0, R.methodsS3-1.8.2, R.oo-1.25.0, R.rsp-0.45.0, R.utils-2.12.3, R2WinBUGS-2.1-21, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.3, 
randtoolbox-2.0.4, rangeModelMetadata-0.1.5, ranger-0.16.0, RANN-2.6.1, rapidjsonr-1.2.0, rARPACK-0.11-0, raster-3.6-26, rasterVis-0.51.6, ratelimitr-0.4.1, RBesT-1.7-2, rbibutils-2.2.16, rbison-1.0.0, Rborist-0.3-5, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, RColorBrewer-1.1-3, RcppArmadillo-0.12.6.6.1, RcppEigen-0.3.3.9.4, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.6, RcppTOML-0.2.2, RCurl-1.98-1.13, rda-1.2-1, Rdpack-2.6, rdrop2-0.8.2.1, reactable-0.4.4, reactR-0.5.0, readbitmap-0.1.5, reader-1.0.6, readODS-2.1.0, readr-2.1.4, readxl-1.4.3, rebird-1.3.0, recipes-1.0.8, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.5, relsurv-2.2-9, rematch-2.0.0, rentrez-1.2.3, renv-1.0.3, reprex-2.0.2, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.34.0, rex-1.2.1, rgbif-3.7.8, RGCCA-3.0.2, rgdal-1.6-7, rgeos-0.6-4, rgexf-0.16.2, rgl-1.2.8, Rglpk-0.6-5, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.7, RInside-0.2.18, rio-1.0.1, riskRegression-2023.09.08, ritis-1.0.0, RItools-0.3-3, rJava-1.0-10, rjson-0.2.21, RJSONIO-1.3-1.9, rle-0.9.2, rlecuyer-0.3-8, rlemon-0.2.1, rlist-0.4.6.2, rmeta-3.0, Rmpfr-0.9-4, rms-6.7-1, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.5.1, robustbase-0.99-1, ROCR-1.0-11, ROI-1.0-1, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.4, roptim-0.1.6, rotl-3.1.0, rpact-3.4.0, rpart-4.1.23, rpf-1.0.14, RPMM-1.25, RPostgreSQL-0.7-5, rrcov-1.7-4, rredlist-0.7.1, rsample-1.2.0, rsconnect-1.1.1, Rserve-1.8-13, RSNNS-0.4-17, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.4, Rssa-1.0.5, rstan-2.32.3, rstantools-2.3.1.1, rstatix-0.7.2, rtdists-0.11-5, Rtsne-0.17, Rttf2pt1-1.3.12, RUnit-0.4.32, ruv-0.9.7.1, rvertnet-0.8.2, rvest-1.0.3, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, RWekajars-3.9.3-2, s2-1.1.4, sampling-2.10, sandwich-3.0-2, SBdecomp-1.2, scales-1.3.0, scam-1.2-14, scatterpie-0.2.1, scatterplot3d-0.3-44, scs-3.2.4, sctransform-0.4.1, 
SDMTools-1.1-221.2, seewave-2.2.3, segmented-2.0-0, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-36, servr-0.27, setRNG-2022.4-1, sf-1.0-14, sfheaders-0.4.3, sfsmisc-1.1-16, shadowtext-0.1.2, shape-1.4.6, shapefiles-0.7.2, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-1.8-0, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.1, sm-2.2-5.7.1, smoof-1.6.0.3, smoother-1.1, sn-2.1.1, sna-2.7-2, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.1, snowfall-1.84-6.3, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sp-2.1-2, spaa-0.2.2, spam-2.10-0, spaMM-4.4.0, SparseM-1.81, SPAtest-3.1.2, spatial-7.3-17, spatstat-3.0-7, spatstat.core-2.4-4, spatstat.data-3.0-3, spatstat.explore-3.2-5, spatstat.geom-3.2-7, spatstat.linnet-3.1-3, spatstat.model-3.2-8, spatstat.random-3.2-2, spatstat.sparse-3.0-3, spatstat.utils-3.0-4, spData-2.3.0, spdep-1.3-1, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.2, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.26.28, stargazer-5.2.3, stars-0.6-4, startupmsg-0.9.6, StatMatch-1.4.1, statmod-1.5.0, statnet-2019.6, statnet.common-4.9.0, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.12, stringmagic-1.0.0, strucchange-1.5-3, styler-1.10.2, subplex-1.8, SuperLearner-2.0-28.1, SuppDists-1.1-9.7, survey-4.2-1, survival-3.5-7, survivalROC-1.0.3.1, svd-0.5.5, svglite-2.1.3, svUnit-1.0.6, swagger-3.33.1, symmoments-1.2.1, tableone-0.13.2, tabletools-0.1.0, tau-0.0-25, taxize-0.9.100, tcltk2-1.2-11, tclust-1.5-5, TeachingDemos-2.12, tensor-1.5, tensorA-0.36.2, tergm-4.2.0, terra-1.7-55, testit-0.13, textcat-1.0-8, textplot-0.2.2, TFisher-0.2.0, TH.data-1.1-2, threejs-0.3.3, tictoc-1.2, tidybayes-3.0.6, tidygraph-1.2.3, tidyr-1.3.0, tidyselect-1.2.0, tidytext-0.4.1, tidytree-0.4.5, tidyverse-2.0.0, tiff-0.1-12, timechange-0.2.0, timeDate-4022.108, timereg-2.0.5, 
tkrplot-0.0-27, tm-0.7-11, tmap-3.3-4, tmaptools-3.1-1, TMB-1.9.9, tmle-2.0.0, tmvnsim-1.0-2, tmvtnorm-1.6, tokenizers-0.3.0, topicmodels-0.2-15, TraMineR-2.2-8, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-9, trust-0.1-8, tseries-0.10-55, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.4, tuneR-1.4.6, twang-2.6, tweedie-2.3.5, tweenr-2.0.2, tzdb-0.4.0, ucminf-1.2.0, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-0.9.0, uniqueAtomMat-0.1-3-2, units-0.8-5, unmarked-1.3.2, UpSetR-1.4.0, urca-1.3-3, urltools-1.7.3, uroot-2.1-2, uuid-1.1-1, V8-4.4.1, varhandle-2.0.6, vcd-1.4-11, vcfR-1.15.0, vegan-2.6-4, VennDiagram-1.7.3, VGAM-1.1-9, VIM-6.2.2, VineCopula-2.5.0, vioplot-0.4.0, vipor-0.4.5, viridis-0.6.4, viridisLite-0.4.2, visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.5, VSURF-1.2.0, warp-0.2.1, waveslim-1.8.4, wdm-0.2.4, webshot-0.5.5, webutils-1.2.0, weights-1.0.4, WeightSVM-1.7-13, wellknown-0.7.4, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.5.0, wikitaxa-0.4.0, wk-0.9.1, word2vec-0.4.0, wordcloud-2.6, worrms-0.4.3, writexl-1.4.2, WriteXLS-6.4.0, xgboost-1.7.6.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.16, xts-0.13.1, yaImpute-1.0-33, yulab.utils-0.1.0, zeallot-0.1.0, zoo-1.8-12

"},{"location":"available_software/detail/R/","title":"R","text":"

R is a free software environment for statistical computing and graphics.

https://www.r-project.org/

"},{"location":"available_software/detail/R/#available-modules","title":"Available modules","text":"

The overview below shows which R installations are available per target architecture in EESSI, ordered by software version (newest to oldest).

To start using R, load one of these modules using a module load command like:

module load R/4.4.1-gfbf-2023b
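A slightly fuller session might look like the sketch below. It assumes the EESSI environment has already been initialised and that the standard Lmod commands (`module avail`, `module load`) and `Rscript` are on your path; exact output depends on your system and architecture.

```shell
# Discover which R versions the stack provides for your architecture
module avail R/

# Load a specific version, then confirm it is the one on your PATH
module load R/4.4.1-gfbf-2023b
which R
Rscript -e 'R.version.string'
```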

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

| | aarch64/generic | aarch64/neoverse_n1 | aarch64/neoverse_v1 | x86_64/generic | x86_64/amd/zen2 | x86_64/amd/zen3 | x86_64/amd/zen4 | x86_64/intel/haswell | x86_64/intel/skylake_avx512 |
|---|---|---|---|---|---|---|---|---|---|
| R/4.4.1-gfbf-2023b | x | x | x | x | x | x | x | x | x |
| R/4.3.2-gfbf-2023a | x | x | x | x | x | x | x | x | x |
| R/4.2.2-foss-2022b | x | x | x | x | x | x | - | x | x |

"},{"location":"available_software/detail/R/#r441-gfbf-2023b","title":"R/4.4.1-gfbf-2023b","text":"

This is a list of extensions included in the module:

askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-10, brio-1.1.5, bslib-0.7.0, cachem-1.1.0, callr-3.7.6, cli-3.6.3, clipr-0.8.0, commonmark-1.9.1, compiler, cpp11-0.4.7, crayon-1.5.3, credentials-2.0.1, curl-5.2.1, datasets, desc-1.4.3, devtools-2.4.5, diffobj-0.3.5, digest-0.6.36, downlit-0.4.4, ellipsis-0.3.2, evaluate-0.24.0, fansi-1.0.6, fastmap-1.2.0, fontawesome-0.5.2, fs-1.6.4, gert-2.0.1, gh-1.4.1, gitcreds-0.1.2, glue-1.7.0, graphics, grDevices, grid, highr-0.11, htmltools-0.5.8.1, htmlwidgets-1.6.4, httpuv-1.6.15, httr-1.4.7, httr2-1.0.1, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.8, knitr-1.47, later-1.3.2, lifecycle-1.0.4, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.2.0, parallel, pillar-1.9.0, pkgbuild-1.4.4, pkgconfig-2.0.3, pkgdown-2.0.9, pkgload-1.3.4, praise-1.0.0, prettyunits-1.2.0, processx-3.8.4, profvis-0.3.8, promises-1.3.0, ps-1.7.6, purrr-1.0.2, R6-2.5.1, ragg-1.3.2, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.12, rematch2-2.1.2, remotes-2.5.0, rlang-1.1.4, rmarkdown-2.27, roxygen2-7.3.1, rprojroot-2.0.4, rstudioapi-0.16.0, rversions-2.1.2, sass-0.4.9, sessioninfo-1.2.2, shiny-1.8.1.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.8.4, stringr-1.5.1, sys-3.4.2, systemfonts-1.1.0, tcltk, testthat-3.2.1.1, textshaping-0.4.0, tibble-3.2.1, tinytex-0.51, tools, urlchecker-1.0.1, usethis-2.2.3, utf8-1.2.4, utils, vctrs-0.6.5, waldo-0.5.2, whisker-0.4.1, withr-3.0.0, xfun-0.45, xml2-1.3.6, xopen-1.0.1, xtable-1.8-4, yaml-2.3.8, zip-2.3.1

"},{"location":"available_software/detail/R/#r432-gfbf-2023a","title":"R/4.3.2-gfbf-2023a","text":"

This is a list of extensions included in the module:

askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-8, brio-1.1.3, bslib-0.5.1, cachem-1.0.8, callr-3.7.3, cli-3.6.1, clipr-0.8.0, commonmark-1.9.0, compiler, cpp11-0.4.6, crayon-1.5.2, credentials-2.0.1, curl-5.1.0, datasets, desc-1.4.2, devtools-2.4.5, diffobj-0.3.5, digest-0.6.33, downlit-0.4.3, ellipsis-0.3.2, evaluate-0.23, fansi-1.0.5, fastmap-1.1.1, fontawesome-0.5.2, fs-1.6.3, gert-2.0.0, gh-1.4.0, gitcreds-0.1.2, glue-1.6.2, graphics, grDevices, grid, highr-0.10, htmltools-0.5.7, htmlwidgets-1.6.2, httpuv-1.6.12, httr-1.4.7, httr2-0.2.3, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.7, knitr-1.45, later-1.3.1, lifecycle-1.0.3, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.1.1, parallel, pillar-1.9.0, pkgbuild-1.4.2, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.3, praise-1.0.0, prettyunits-1.2.0, processx-3.8.2, profvis-0.3.8, promises-1.2.1, ps-1.7.5, purrr-1.0.2, R6-2.5.1, ragg-1.2.6, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.11, rematch2-2.1.2, remotes-2.4.2.1, rlang-1.1.2, rmarkdown-2.25, roxygen2-7.2.3, rprojroot-2.0.4, rstudioapi-0.15.0, rversions-2.1.2, sass-0.4.7, sessioninfo-1.2.2, shiny-1.7.5.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.7.12, stringr-1.5.0, sys-3.4.2, systemfonts-1.0.5, tcltk, testthat-3.2.0, textshaping-0.3.7, tibble-3.2.1, tinytex-0.48, tools, urlchecker-1.0.1, usethis-2.2.2, utf8-1.2.4, utils, vctrs-0.6.4, waldo-0.5.2, whisker-0.4.1, withr-2.5.2, xfun-0.41, xml2-1.3.5, xopen-1.0.0, xtable-1.8-4, yaml-2.3.7, zip-2.3.0

"},{"location":"available_software/detail/R/#r422-foss-2022b","title":"R/4.2.2-foss-2022b","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.0, abe-3.0.1, abind-1.4-5, acepack-1.4.1, adabag-4.2, ade4-1.7-22, ADGofTest-0.3, admisc-0.31, aggregation-1.0.1, AICcmodavg-2.3-1, akima-0.6-3.4, alabama-2022.4-1, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-1.0.0, animation-2.7, aod-1.3.2, apcluster-1.4.10, ape-5.7-1, aplot-0.1.10, argparse-2.2.2, aricode-1.0.2, arm-1.13-1, askpass-1.1, asnipe-1.1.16, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-3, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-10, aws-2.5-1, awsMethods-1.1-1, backports-1.4.1, bacr-1.0.1, bartMachine-1.3.3.1, bartMachineJARs-1.2.1, base, base64-2.0.1, base64enc-0.1-3, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-5, BayesPen-1.0, bayesplot-1.10.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25, BCEE-1.3.1, BDgraph-2.72, bdsmatrix-1.3-6, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.0, betareg-3.1-4, BH-1.81.0-1, BiasedUrn-2.0.9, bibtex-0.5.1, bigD-0.2.0, BIGL-1.7.0, bigmemory-4.6.1, bigmemory.sri-0.1.6, bindr-0.1.1, bindrcpp-0.2.2, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-2, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.4-7, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.8.1, bold-1.2.0, boot-1.3-28.1, bootstrap-2019.6, Boruta-8.0.0, brew-1.0-8, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brio-1.1.3, brms-2.19.0, Brobdingnag-1.2-9, broom-1.0.4, broom.helpers-1.12.0, broom.mixed-0.2.9.4, bslib-0.4.2, bst-0.3-24, cachem-1.0.7, Cairo-1.6-0, calibrate-1.7.7, callr-3.7.3, car-3.1-1, carData-3.0-5, caret-6.0-93, catlearn-0.9.1, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.2.10, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.1.0, chemometrics-1.4.2, 
chkptstanr-0.1.1, chron-2.3-60, circlize-0.4.15, circular-0.4-95, class-7.3-21, classInt-0.4-9, cld2-1.2.4, cli-3.6.0, clipr-0.8.0, clisymbols-1.2.0, clock-0.6.1, clue-0.3-64, cluster-2.1.4, clusterGeneration-1.3.7, clusterRepro-0.9, clustree-0.5.0, clValid-0.7, cmprsk-2.2-11, cNORM-3.0.2, cobalt-4.4.1, cobs-1.3-5, coda-0.19-4, codetools-0.2-19, coin-1.4-2, collapse-1.9.3, colorspace-2.1-0, colourpicker-1.2.0, combinat-0.0-8, ComICS-1.0.4, commonmark-1.8.1, compiler, ComplexUpset-1.3.3, compositions-2.0-5, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-2, corpcor-1.6.10, corrplot-0.92, covr-3.6.1, CovSel-1.2.1, covsim-1.0.0, cowplot-1.1.1, coxed-0.3.3, coxme-2.2-18.1, cpp11-0.4.3, crayon-1.5.2, credentials-1.3.2, crfsuite-0.4.1, crosstalk-1.2.0, crul-1.3, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.0.4.6, cubelyr-1.0.2, curl-5.0.0, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-11, d3Network-0.5.2.1, dagitty-0.3-1, data.table-1.14.8, data.tree-1.0.0, DataCombine-0.2.21, datasets, date-1.2-42, dbarts-0.9-23, DBI-1.1.3, dbplyr-2.3.1, dbscan-1.1-11, dcurver-0.9.2, ddalpha-1.3.13, deal-1.2-42, debugme-1.1.0, deldir-1.0-6, dendextend-1.16.0, DEoptim-2.2-8, DEoptimR-1.0-11, DepthProc-2.1.5, Deriv-4.1.3, desc-1.4.2, DescTools-0.99.48, deSolve-1.35, devtools-2.4.5, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.9, DiceKriging-1.6.0, dichromat-2.0-0.1, diffobj-0.3.5, digest-0.6.31, dimRed-0.2.6, diptest-0.76-0, DiscriMiner-0.1-29, dismo-1.3-9, distillery-1.2-1, distr-2.9.1, distrEx-2.9.0, distributional-0.3.1, DistributionUtils-0.6-0, diveRsity-1.9.90, dlm-1.1-6, DMCfun-2.0.2, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.0-2, downlit-0.4.2, downloader-0.4, dplyr-1.1.0, dr-3.0.10, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.27, dtangle-2.0.9, dtplyr-1.3.0, DTRreg-1.7, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, 
e1071-1.7-13, earth-5.3.2, EasyABC-1.5.2, ECOSolveR-0.5.5, elementR-1.3.7, ellipse-0.4.3, ellipsis-0.3.2, elliptic-1.4-0, emdbook-1.3.12, emmeans-1.8.5, emoa-0.5-0.1, emulator-1.2-21, energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, EnvStats-2.7.0, epitools-0.5-10.1, ergm-4.4.0, ergm.count-4.1.1, estimability-1.4.1, evaluate-0.20, EValue-4.1.3, evd-2.3-6.1, Exact-3.2, expm-0.999-7, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-3, FactoMineR-2.7, FactorCopula-0.9.3, fail-1.3, fansi-1.0.4, farver-2.1.1, fastcluster-1.2.3, fastDummies-1.6.3, fasterize-1.0.4, fastICA-1.2-3, fastmap-1.1.1, fastmatch-1.1-3, fdrtool-1.2.17, feather-0.3.5, ff-4.0.9, fftw-1.0-7, fftwtools-0.9-11, fields-14.1, filehash-2.4-5, finalfit-1.0.6, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-8, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-1, flexmix-2.3-19, flextable-0.9.2, fma-2.5, FME-1.3.6.2, fmri-1.9.11, FNN-1.1.3.1, fontawesome-0.5.0, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.21, foreign-0.8-84, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-10, fpp-0.5, fracdiff-1.5-2, fs-1.6.1, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.32.0, future.apply-1.10.0, gam-1.22-1, gamlss-5.4-12, gamlss.data-6.0-2, gamlss.dist-6.0-5, gamlss.tr-5.1-7, gamm4-0.2-6, gap-1.5-1, gap.datasets-0.0.5, gapfill-0.9.6-1, gargle-1.3.0, gaussquad-1.0-3, gbm-2.1.8.1, gbRd-0.4-11, gclus-1.3.2, gdalUtilities-1.2.5, gdalUtils-2.0.3.2, gdata-2.18.0.1, gdistance-1.6, gdtools-0.3.3, gee-4.13-25, geeM-0.10.1, geepack-1.3.9, geex-1.1.1, geiger-2.0.10, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.8, geojson-0.3.5, geojsonio-0.11.3, geojsonsf-2.0.3, geometries-0.2.2, geometry-0.4.7, gert-1.9.2, getopt-1.20.3, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.1.2, ggbeeswarm-0.7.1, ggdag-0.2.7, ggExtra-0.10.0, ggfan-0.1.3, ggforce-0.4.1, ggformula-0.10.2, ggfun-0.0.9, ggh4x-0.2.3, 
ggnetwork-0.5.12, ggplot2-3.4.1, ggplotify-0.1.0, ggpubr-0.6.0, ggraph-2.1.0, ggrepel-0.9.3, ggridges-0.5.4, ggsci-3.0.0, ggsignif-0.6.4, ggstance-0.3.6, ggvenn-0.1.9, ggvis-0.4.8, gh-1.4.0, GillespieSSA-0.6.2, git2r-0.31.0, gitcreds-0.1.2, GJRM-0.2-6.1, glasso-1.11, gld-2.6.6, gllvm-1.4.1, glmmML-1.1.4, glmmTMB-1.1.5, glmnet-4.1-6, GlobalOptions-0.1.2, globals-0.16.2, glue-1.6.2, gmm-1.7, gmodels-2.18.1.1, gmp-0.7-1, gnumeric-0.7-8, goftest-1.2-3, gomms-1.0, googledrive-2.0.0, googlesheets4-1.0.1, gower-1.0.1, GPArotation-2022.10-2, gplots-3.1.3, graphics, graphlayouts-0.8.4, grDevices, grf-2.2.1, grid, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.2-0, grpreg-3.4.0, GSA-1.03.2, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.8.0, gtable-0.3.1, gtools-3.9.4, gtsummary-1.7.0, GUTS-1.2.3, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.40.0.1, hal9001-0.4.3, haldensify-0.2.3, hardhat-1.2.0, harmony-0.1.1, hash-2.2.6.2, haven-2.5.2, hdf5r-1.3.8, hdm-0.3.1, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.2, HGNChelper-0.8.1, HiddenMarkov-1.8-13, highr-0.10, Hmisc-5.0-1, hms-1.1.2, Hmsc-3.0-13, htmlTable-2.4.1, htmltools-0.5.4, htmlwidgets-1.6.1, httpcode-0.3.0, httpuv-1.6.9, httr-1.4.5, httr2-0.2.2, huge-1.3.5, hunspell-3.0.2, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.20, idr-1.3, ids-1.0.1, ie2misc-0.9.0, igraph-1.4.1, image.binarization-0.1.3, imager-0.42.18, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.0.1, infotheo-1.2.0.1, ini-0.3.1, inline-0.3.19, intergraph-2.0-2, interp-1.1-3, interpretR-0.2.4, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-18.1, isoband-0.2.7, ISOcodes-2022.09.29, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-3, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-4, jpeg-0.1-10, jqr-1.3.1, jquerylib-0.1.4, jsonify-1.2.2, jsonlite-1.8.4, jstable-1.0.7, juicyjuice-0.1.0, kde1d-1.0.5, kedd-1.0.3, kernlab-0.9-32, 
KernSmooth-2.23-20, kinship2-1.9.6, klaR-1.7-1, knitr-1.42, KODAMA-2.4, kohonen-3.0.11, ks-1.14.0, labdsv-2.0-1, labeling-0.4.2, labelled-2.10.0, laeken-0.5.2, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, later-1.3.0, lattice-0.20-45, latticeExtra-0.6-30, lava-1.7.2.1, lavaan-0.6-15, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.4.2, ldbounds-2.0.0, leafem-0.2.0, leaflet-2.1.2, leaflet.providers-1.9.0, leafsync-0.1.0, leaps-3.1, LearnBayes-2.15.1, leiden-0.4.3, lhs-1.1.6, libcoin-1.0-9, lifecycle-1.0.3, limSolve-1.5.6, linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.0, lme4-1.1-32, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-2.9, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.7, logcondens-2.1.7, logger-0.2.2, logistf-1.24.1, logspline-2.1.19, longitudinal-1.1.13, longmemo-1.1-2, loo-2.5.1, lpSolve-5.6.18, lpSolveAPI-5.5.2.0-17.9, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.2, lwgeom-0.2-11, magic-1.6-1, magick-2.7.4, magrittr-2.0.3, MALDIquant-1.22, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.1, maptools-1.1-6, markdown-1.5, MASS-7.3-58.3, Matching-4.10-8, MatchIt-4.5.1, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.5-3, matrixcalc-1.0-6, MatrixModels-0.5-1, matrixStats-0.63.0, maxLik-1.5-2, maxlike-0.1-9, maxnet-0.1.4, mboost-2.9-7, mclogit-0.9.6, mclust-6.0.0, mcmc-0.9-7, MCMCpack-1.6-3, mcmcse-1.5-0, mda-0.5-3, medflex-0.6-7, mediation-4.5.0, memisc-0.99.31.6, memoise-2.0.1, memuse-4.2-3, MESS-0.5.9, metadat-1.2-0, metafor-3.8-1, MetaUtility-2.1.2, methods, mets-1.3.2, mgcv-1.8-42, mgsub-1.7.3, mhsmm-0.4.16, mi-1.1, mice-3.15.0, miceadds-3.16-18, microbenchmark-1.4.9, MIIVsem-0.5.8, mime-0.12, minerva-1.5.10, miniUI-0.1.1.1, minpack.lm-1.2-3, minqa-1.2.5, mirt-1.38.1, misc3d-0.9-1, miscTools-0.6-26, missForest-1.5, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-3, mlegp-3.1.9, MLmetrics-1.1.1, mlogit-1.1-1, mlr-2.19.1, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, 
modelr-0.1.10, modeltools-0.2-23, MODIStsp-2.1.0, momentfit-0.3, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.2.1, mpath-0.4-2.23, mRMRe-2.1.2, msm-1.7, mstate-0.3.2, multcomp-1.4-23, multcompView-0.1-8, multicool-0.1-12, multipol-1.0-7, munsell-0.5.0, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.1-3, nabor-0.5.0, naniar-1.0.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.21, NCmisc-1.2.0, network-1.18.1, networkDynamic-0.11.3, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-13, ngspatial-1.2-2, NISTunits-1.0.1, nleqslv-3.3.4, nlme-3.1-162, nloptr-2.0.3, NLP-0.2-1, nlsem-0.8, nnet-7.3-18, nnls-1.4, nonnest2-0.5-5, nor1mix-1.3-0, norm-1.0-10.0, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.7-10, OceanView-1.0.6, oddsratio-2.0.1, officer-0.6.2, openair-2.16-0, OpenMx-2.21.1, openssl-2.0.6, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2022-4.30, optmatch-0.10.6, optparse-1.7.3, ordinal-2022.11-16, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.0.8, outliers-0.15, packrat-0.9.1, pacman-0.5.1, pammtools-0.5.8, pamr-1.56.1, pan-1.6, parallel, parallelDist-0.2.6, parallelly-1.34.0, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-13, partykit-1.2-18, pastecs-1.3.21, patchwork-1.1.2, pbapply-1.7-0, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-3, pdp-0.8.1, PearsonDS-1.2.3, pec-2022.05.04, penalized-0.9-52, penfa-0.1.1, peperr-1.4, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.10, phytools-1.5-1, pillar-1.8.1, pim-2.0.2, pinfsc50-1.2.0, pixmap-0.4-12, pkgbuild-1.4.0, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.2, pkgmaker-0.32.8, plogr-0.2.0, plot3D-1.4, plot3Drgl-1.0.4, plotly-4.10.1, plotmo-3.6.2, plotrix-3.8-2, pls-2.8-1, plyr-1.8.8, PMA-1.2.1, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.22, Polychrome-1.5.1, polyclip-1.10-4, polycor-0.8-1, polynom-1.4-1, 
posterior-1.4.1, ppcor-1.1, prabclus-2.3-2, pracma-2.4.2, praise-1.0.0, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, prettyunits-1.1.1, princurve-2.1.6, pROC-1.18.0, processx-3.8.0, prodlim-2019.11.13, profileModel-0.6.1, proftools-0.99-3, profvis-0.3.7, progress-1.2.2, progressr-0.13.0, projpred-2.4.0, promises-1.2.0.1, proto-1.0.0, protolite-2.3.0, proxy-0.4-27, proxyC-0.3.3, pryr-0.1.6, ps-1.7.2, pscl-1.5.5, pspline-1.0-19, psych-2.2.9, Publish-2023.01.17, pulsar-0.3.10, purrr-1.0.1, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.3, qqman-0.1.8, qrnn-2.0.5, quadprog-1.5-8, quanteda-3.3.0, quantmod-0.4.20, quantreg-5.94, questionr-0.7.8, R.cache-0.16.0, R.matlab-3.7.0, R.methodsS3-1.8.2, R.oo-1.25.0, R.rsp-0.45.0, R.utils-2.12.2, R2WinBUGS-2.1-21, R6-2.5.1, ragg-1.2.5, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.1, randtoolbox-2.0.4, rangeModelMetadata-0.1.4, ranger-0.14.1, RANN-2.6.1, rapidjsonr-1.2.0, rappdirs-0.3.3, rARPACK-0.11-0, raster-3.6-20, rasterVis-0.51.5, ratelimitr-0.4.1, RBesT-1.6-6, rbibutils-2.2.13, rbison-1.0.0, Rborist-0.3-2, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, rcmdcheck-1.4.0, RColorBrewer-1.1-3, Rcpp-1.0.10, RcppArmadillo-0.12.0.1.0, RcppEigen-0.3.3.9.3, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.3, RcppTOML-0.2.2, RCurl-1.98-1.10, rda-1.2-1, Rdpack-2.4, rdrop2-0.8.2.1, readbitmap-0.1.5, reader-1.0.6, readODS-1.8.0, readr-2.1.4, readxl-1.4.2, rebird-1.3.0, recipes-1.0.5, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.3, relsurv-2.2-9, rematch-1.0.1, rematch2-2.1.2, remotes-2.4.2, rentrez-1.2.3, renv-0.17.1, reprex-2.0.2, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.28, rex-1.2.1, rgbif-3.7.5, RGCCA-2.1.2, rgdal-1.6-5, rgeos-0.6-2, rgexf-0.16.2, rgl-1.0.1, Rglpk-0.6-4, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.6, RInside-0.2.18, rio-0.5.29, riskRegression-2022.11.28, ritis-1.0.0, RItools-0.3-3, rJava-1.0-6, rjson-0.2.21, RJSONIO-1.3-1.8, rlang-1.1.0, rle-0.9.2, 
rlecuyer-0.3-5, rlemon-0.2.1, rlist-0.4.6.2, rmarkdown-2.20, rmeta-3.0, Rmpfr-0.9-1, rms-6.5-0, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.4.5, robustbase-0.95-0, ROCR-1.0-11, ROI-1.0-0, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.3, roptim-0.1.6, rotl-3.0.14, roxygen2-7.2.3, rpact-3.3.4, rpart-4.1.19, rpf-1.0.11, RPMM-1.25, rprojroot-2.0.3, rrcov-1.7-2, rredlist-0.7.1, rsample-1.1.1, rsconnect-0.8.29, Rserve-1.8-11, RSNNS-0.4-15, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.0, Rssa-1.0.5, rstan-2.21.8, rstantools-2.3.0, rstatix-0.7.2, rstudioapi-0.14, rtdists-0.11-5, Rtsne-0.16, Rttf2pt1-1.3.12, RUnit-0.4.32, ruv-0.9.7.1, rversions-2.1.2, rvertnet-0.8.2, rvest-1.0.3, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, RWekajars-3.9.3-2, s2-1.1.2, sampling-2.9, sandwich-3.0-2, sass-0.4.5, SBdecomp-1.2, scales-1.2.1, scam-1.2-13, scatterpie-0.1.8, scatterplot3d-0.3-43, scs-3.2.4, sctransform-0.3.5, SDMTools-1.1-221.2, seewave-2.2.0, segmented-1.6-2, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-23, servr-0.25, sessioninfo-1.2.2, setRNG-2022.4-1, sf-1.0-11, sfheaders-0.4.2, sfsmisc-1.1-14, shadowtext-0.1.2, shape-1.4.6, shapefiles-0.7.2, shiny-1.7.4, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-0.7-7, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.0, sm-2.2-5.7.1, smoof-1.6.0.3, smoother-1.1, sn-2.1.0, sna-2.7-1, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.0, snowfall-1.84-6.2, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sourcetools-0.1.7-1, sp-1.6-0, spaa-0.2.2, spam-2.9-1, spaMM-4.2.1, SparseM-1.81, SPAtest-3.1.2, spatial-7.3-16, spatstat-3.0-3, spatstat.core-2.4-4, spatstat.data-3.0-1, spatstat.explore-3.1-0, spatstat.geom-3.1-0, spatstat.linnet-3.0-6, spatstat.model-3.2-1, spatstat.random-3.1-4, spatstat.sparse-3.0-1, 
spatstat.utils-3.0-2, spData-2.2.2, splines, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.1, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.21.0-7, stargazer-5.2.3, stars-0.6-0, startupmsg-0.9.6, StatMatch-1.4.1, statmod-1.5.0, statnet-2019.6, statnet.common-4.8.0, stats, stats4, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.10, stringi-1.7.12, stringr-1.5.0, strucchange-1.5-3, styler-1.9.1, subplex-1.8, SuperLearner-2.0-28, SuppDists-1.1-9.7, survey-4.1-1, survival-3.5-5, survivalROC-1.0.3.1, svd-0.5.3, svglite-2.1.1, swagger-3.33.1, symmoments-1.2.1, sys-3.4.1, systemfonts-1.0.4, tableone-0.13.2, tabletools-0.1.0, tau-0.0-24, taxize-0.9.100, tcltk, tcltk2-1.2-11, tclust-1.5-2, TeachingDemos-2.12, tensor-1.5, tensorA-0.36.2, tergm-4.1.1, terra-1.7-18, testit-0.13, testthat-3.1.7, textcat-1.0-8, textplot-0.2.2, textshaping-0.3.6, TFisher-0.2.0, TH.data-1.1-1, threejs-0.3.3, tibble-3.2.0, tictoc-1.1, tidygraph-1.2.3, tidyr-1.3.0, tidyselect-1.2.0, tidytext-0.4.1, tidytree-0.4.2, tidyverse-2.0.0, tiff-0.1-11, timechange-0.2.0, timeDate-4022.108, timereg-2.0.5, tinytex-0.44, tkrplot-0.0-27, tm-0.7-11, tmap-3.3-3, tmaptools-3.1-1, TMB-1.9.2, tmle-1.5.0.2, tmvnsim-1.0-2, tmvtnorm-1.5, tokenizers-0.3.0, tools, topicmodels-0.2-13, TraMineR-2.2-6, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-8, trust-0.1-8, tseries-0.10-53, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.3, tuneR-1.4.3, twang-2.5, tweedie-2.3.5, tweenr-2.0.2, tzdb-0.3.0, ucminf-1.1-4.1, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-0.8.0, uniqueAtomMat-0.1-3-2, units-0.8-1, unmarked-1.2.5, UpSetR-1.4.0, urca-1.3-3, urlchecker-1.0.1, urltools-1.7.3, uroot-2.1-2, usethis-2.1.6, utf8-1.2.3, utils, uuid-1.1-0, V8-4.2.2, varhandle-2.0.5, vcd-1.4-11, vcfR-1.14.0, vctrs-0.6.0, vegan-2.6-4, VennDiagram-1.7.3, VGAM-1.1-8, VIM-6.2.2, VineCopula-2.4.5, vioplot-0.4.0, vipor-0.4.5, viridis-0.6.2, viridisLite-0.4.1, 
visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.1, VSURF-1.2.0, waldo-0.4.0, warp-0.2.0, waveslim-1.8.4, wdm-0.2.3, webshot-0.5.4, webutils-1.1, weights-1.0.4, WeightSVM-1.7-11, wellknown-0.7.4, whisker-0.4.1, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.5.0, wikitaxa-0.4.0, withr-2.5.0, wk-0.7.1, word2vec-0.3.4, wordcloud-2.6, worrms-0.4.2, WriteXLS-6.4.0, xfun-0.37, xgboost-1.7.3.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.13, xml2-1.3.3, xopen-1.0.0, xtable-1.8-4, xts-0.13.0, yaImpute-1.0-33, yaml-2.3.7, yulab.utils-0.0.6, zeallot-0.1.0, zip-2.2.2, zoo-1.8-11

"},{"location":"available_software/detail/RE2/","title":"RE2","text":"

RE2 is a fast, safe, thread-friendly alternative to backtracking regular expression engines like those used in PCRE, Perl, and Python. It is a C++ library.

https://github.com/google/re2

"},{"location":"available_software/detail/RE2/#available-modules","title":"Available modules","text":"

The overview below shows which RE2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using RE2, load one of these modules using a module load command like:

module load RE2/2024-03-01-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 RE2/2024-03-01-GCCcore-13.2.0 x x x x x x x x x RE2/2023-08-01-GCCcore-12.3.0 x x x x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ROOT/","title":"ROOT","text":"

The ROOT system provides a set of OO frameworks with all the functionality needed to handle and analyze large amounts of data in a very efficient way.

https://root.cern.ch

"},{"location":"available_software/detail/ROOT/#available-modules","title":"Available modules","text":"

The overview below shows which ROOT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ROOT, load one of these modules using a module load command like:

module load ROOT/6.30.06-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ROOT/6.30.06-foss-2023a x x x x x x x x x ROOT/6.26.10-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/RapidJSON/","title":"RapidJSON","text":"

A fast JSON parser/generator for C++ with both SAX/DOM style API

https://rapidjson.org

"},{"location":"available_software/detail/RapidJSON/#available-modules","title":"Available modules","text":"

The overview below shows which RapidJSON installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using RapidJSON, load one of these modules using a module load command like:

module load RapidJSON/1.1.0-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x - x x RapidJSON/1.1.0-20240409-GCCcore-13.2.0 x x x x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Raptor/","title":"Raptor","text":"

Set of parsers and serializers that generate Resource Description Framework (RDF) triples by parsing syntaxes or serialize the triples into a syntax.

https://librdf.org/raptor/

"},{"location":"available_software/detail/Raptor/#available-modules","title":"Available modules","text":"

The overview below shows which Raptor installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Raptor, load one of these modules using a module load command like:

module load Raptor/2.0.16-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Raptor/2.0.16-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Rasqal/","title":"Rasqal","text":"

A library handling RDF query syntaxes, construction and execution

https://librdf.org/rasqal

"},{"location":"available_software/detail/Rasqal/#available-modules","title":"Available modules","text":"

The overview below shows which Rasqal installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rasqal, load one of these modules using a module load command like:

module load Rasqal/0.9.33-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rasqal/0.9.33-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ReFrame/","title":"ReFrame","text":"

ReFrame is a framework for writing regression tests for HPC systems.

https://github.com/reframe-hpc/reframe

"},{"location":"available_software/detail/ReFrame/#available-modules","title":"Available modules","text":"

The overview below shows which ReFrame installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ReFrame, load one of these modules using a module load command like:

module load ReFrame/4.6.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ReFrame/4.6.2 x x x x x x x x x ReFrame/4.3.3 x x x x x x x x x"},{"location":"available_software/detail/ReFrame/#reframe462","title":"ReFrame/4.6.2","text":"

This is a list of extensions included in the module:

pip-24.0, reframe-4.6.2, setuptools-68.0.0, wheel-0.42.0

"},{"location":"available_software/detail/ReFrame/#reframe433","title":"ReFrame/4.3.3","text":"

This is a list of extensions included in the module:

pip-21.3.1, reframe-4.3.3, wheel-0.37.1

"},{"location":"available_software/detail/Redland/","title":"Redland","text":"

Redland is a set of free software C libraries that provide support for the Resource Description Framework (RDF).

https://librdf.org/

"},{"location":"available_software/detail/Redland/#available-modules","title":"Available modules","text":"

The overview below shows which Redland installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Redland, load one of these modules using a module load command like:

module load Redland/1.0.17-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Redland/1.0.17-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Rivet/","title":"Rivet","text":"

Rivet toolkit (Robust Independent Validation of Experiment and Theory). To use your own analyses, you must append their path to RIVET_ANALYSIS_PATH.

https://gitlab.com/hepcedar/rivet
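As noted above, Rivet locates user analyses via the RIVET_ANALYSIS_PATH environment variable. A minimal sketch of prepending your own analysis directory (the path my-rivet-analyses is a hypothetical example; adjust it to wherever your compiled analyses live):

```shell
# Hypothetical directory containing your compiled Rivet analyses.
MY_ANALYSES="$HOME/my-rivet-analyses"

# Prepend it, keeping any existing entries on the path.
export RIVET_ANALYSIS_PATH="$MY_ANALYSES${RIVET_ANALYSIS_PATH:+:$RIVET_ANALYSIS_PATH}"
echo "$RIVET_ANALYSIS_PATH"
```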

"},{"location":"available_software/detail/Rivet/#available-modules","title":"Available modules","text":"

The overview below shows which Rivet installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rivet, load one of these modules using a module load command like:

module load Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6 x x x x x x x x x"},{"location":"available_software/detail/Ruby/","title":"Ruby","text":"

Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.

https://www.ruby-lang.org

"},{"location":"available_software/detail/Ruby/#available-modules","title":"Available modules","text":"

The overview below shows which Ruby installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ruby, load one of these modules using a module load command like:

module load Ruby/3.3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ruby/3.3.0-GCCcore-12.3.0 x x x x x x x x x Ruby/3.2.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Ruby/#ruby322-gcccore-1220","title":"Ruby/3.2.2-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

activesupport-5.2.8.1, addressable-2.8.4, arr-pm-0.0.12, backports-3.24.1, bundler-2.4.14, cabin-0.9.0, childprocess-4.1.0, clamp-1.3.2, concurrent-ruby-1.2.2, connection_pool-2.4.1, diff-lcs-1.5.0, ethon-0.16.0, faraday-1.2.0, faraday-net_http-3.0.2, faraday_middleware-1.2.0, ffi-1.15.5, gh-0.18.0, highline-2.1.0, i18n-1.14.1, json-2.6.3, launchy-2.5.2, minitest-5.18.0, multi_json-1.15.0, multipart-post-2.3.0, mustermann-3.0.0, net-http-persistent-2.9.4, net-http-pipeline-1.0.1, public_suffix-5.0.1, pusher-client-0.6.2, rack-2.2.4, rack-protection-3.0.6, rack-test-2.1.0, rspec-3.12.0, rspec-core-3.12.2, rspec-expectations-3.12.3, rspec-mocks-3.12.5, rspec-support-3.12.0, ruby2_keywords-0.0.5, sinatra-3.0.6, thread_safe-0.3.6, tilt-2.2.0, typhoeus-1.4.0, tzinfo-1.1.0, websocket-1.2.9, zeitwerk-2.6.8

"},{"location":"available_software/detail/Rust/","title":"Rust","text":"

Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

https://www.rust-lang.org

"},{"location":"available_software/detail/Rust/#available-modules","title":"Available modules","text":"

The overview below shows which Rust installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rust, load one of these modules using a module load command like:

module load Rust/1.76.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rust/1.76.0-GCCcore-13.2.0 x x x x x x x x x Rust/1.75.0-GCCcore-12.3.0 x x x x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SAMtools/","title":"SAMtools","text":"

SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.

https://www.htslib.org/

"},{"location":"available_software/detail/SAMtools/#available-modules","title":"Available modules","text":"

The overview below shows which SAMtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SAMtools, load one of these modules using a module load command like:

module load SAMtools/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SAMtools/1.18-GCC-12.3.0 x x x x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SCOTCH/","title":"SCOTCH","text":"

Software package and libraries for sequential and parallel graph partitioning, static mapping, and sparse matrix block ordering, and sequential mesh and hypergraph partitioning.

https://www.labri.fr/perso/pelegrin/scotch/

"},{"location":"available_software/detail/SCOTCH/#available-modules","title":"Available modules","text":"

The overview below shows which SCOTCH installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SCOTCH, load one of these modules using a module load command like:

module load SCOTCH/7.0.3-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SCOTCH/7.0.3-gompi-2023a x x x x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/SDL2/","title":"SDL2","text":"

SDL: Simple DirectMedia Layer, a cross-platform multimedia library

https://www.libsdl.org/

"},{"location":"available_software/detail/SDL2/#available-modules","title":"Available modules","text":"

The overview below shows which SDL2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SDL2, load one of these modules using a module load command like:

module load SDL2/2.28.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SDL2/2.28.5-GCCcore-13.2.0 x x x x x x x x x SDL2/2.28.2-GCCcore-12.3.0 x x x x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SEPP/","title":"SEPP","text":"

SATe-enabled Phylogenetic Placement - addresses the problem of phylogenetic placement of short reads into reference alignments and trees.

https://github.com/smirarab/sepp

"},{"location":"available_software/detail/SEPP/#available-modules","title":"Available modules","text":"

The overview below shows which SEPP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SEPP, load one of these modules using a module load command like:

module load SEPP/4.5.1-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SEPP/4.5.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/SIONlib/","title":"SIONlib","text":"

SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.

https://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Software/SIONlib/_node.html

"},{"location":"available_software/detail/SIONlib/#available-modules","title":"Available modules","text":"

The overview below shows which SIONlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SIONlib, load one of these modules using a module load command like:

module load SIONlib/1.7.7-GCCcore-13.2.0-tools\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIONlib/1.7.7-GCCcore-13.2.0-tools x x x x x x x x x"},{"location":"available_software/detail/SIP/","title":"SIP","text":"

SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries.

http://www.riverbankcomputing.com/software/sip/

"},{"location":"available_software/detail/SIP/#available-modules","title":"Available modules","text":"

The overview below shows which SIP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SIP, load one of these modules using a module load command like:

module load SIP/6.8.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIP/6.8.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/SLEPc/","title":"SLEPc","text":"

SLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems.

https://slepc.upv.es

"},{"location":"available_software/detail/SLEPc/#available-modules","title":"Available modules","text":"

The overview below shows which SLEPc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SLEPc, load one of these modules using a module load command like:

module load SLEPc/3.20.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SLEPc/3.20.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SQLAlchemy/","title":"SQLAlchemy","text":"

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. SQLAlchemy provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

https://www.sqlalchemy.org/

"},{"location":"available_software/detail/SQLAlchemy/#available-modules","title":"Available modules","text":"

The overview below shows which SQLAlchemy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SQLAlchemy, load one of these modules using a module load command like:

module load SQLAlchemy/2.0.25-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLAlchemy/2.0.25-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/SQLAlchemy/#sqlalchemy2025-gcccore-1230","title":"SQLAlchemy/2.0.25-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

alembic-1.13.1, async-timeout-4.0.3, asyncpg-0.29.0, greenlet-3.0.3, SQLAlchemy-2.0.25

"},{"location":"available_software/detail/SQLite/","title":"SQLite","text":"

SQLite: SQL Database Engine in a C Library

https://www.sqlite.org/
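Since SQLite is a C library rather than a standalone server, a quick way to sanity-check an installation is through a language binding. A minimal sketch using Python's standard sqlite3 module with an in-memory database:

```python
import sqlite3

# In-memory database: nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, value INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [("a", 1), ("b", 2)])
total = conn.execute("SELECT SUM(value) FROM t").fetchone()[0]
print(total)  # 3
conn.close()
```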

"},{"location":"available_software/detail/SQLite/#available-modules","title":"Available modules","text":"

The overview below shows which SQLite installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SQLite, load one of these modules using a module load command like:

module load SQLite/3.43.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLite/3.43.1-GCCcore-13.2.0 x x x x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/STAR/","title":"STAR","text":"

STAR aligns RNA-seq reads to a reference genome using uncompressed suffix arrays.

https://github.com/alexdobin/STAR

"},{"location":"available_software/detail/STAR/#available-modules","title":"Available modules","text":"

The overview below shows which STAR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using STAR, load one of these modules using a module load command like:

module load STAR/2.7.11b-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 STAR/2.7.11b-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/SWIG/","title":"SWIG","text":"

SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.

http://www.swig.org/
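As a sketch of how SWIG is used: you write an interface file declaring the C/C++ functions to expose, and SWIG generates the wrapper code for the target language. The file below is a hypothetical example (example.i, wrapping an assumed C function fact), not taken from any particular project:

```
/* example.i -- hypothetical SWIG interface file */
%module example
%{
/* Header declaring the wrapped function: int fact(int n); */
#include "example.h"
%}

/* Declarations to expose to the target language. */
int fact(int n);
```

Running swig -python example.i then produces the C wrapper source and a Python module stub, which are compiled and linked against your library.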

"},{"location":"available_software/detail/SWIG/#available-modules","title":"Available modules","text":"

The overview below shows which SWIG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SWIG, load one of these modules using a module load command like:

module load SWIG/4.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SWIG/4.1.1-GCCcore-13.2.0 x x x x x x x x x SWIG/4.1.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ScaFaCoS/","title":"ScaFaCoS","text":"

ScaFaCoS is a library of scalable fast Coulomb solvers.

http://www.scafacos.de/

"},{"location":"available_software/detail/ScaFaCoS/#available-modules","title":"Available modules","text":"

The overview below shows which ScaFaCoS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ScaFaCoS, load one of these modules using a module load command like:

module load ScaFaCoS/1.0.4-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaFaCoS/1.0.4-foss-2023b - - - x x x x x x ScaFaCoS/1.0.4-foss-2023a - - - x x x x x x"},{"location":"available_software/detail/ScaLAPACK/","title":"ScaLAPACK","text":"

The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.

https://www.netlib.org/scalapack/

"},{"location":"available_software/detail/ScaLAPACK/#available-modules","title":"Available modules","text":"

The overview below shows which ScaLAPACK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ScaLAPACK, load one of these modules using a module load command like:

module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x - x x"},{"location":"available_software/detail/SciPy-bundle/","title":"SciPy-bundle","text":"

Bundle of Python packages for scientific software

https://python.org/

"},{"location":"available_software/detail/SciPy-bundle/#available-modules","title":"Available modules","text":"

The overview below shows which SciPy-bundle installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SciPy-bundle, load one of these modules using a module load command like:

module load SciPy-bundle/2023.11-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciPy-bundle/2023.11-gfbf-2023b x x x x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202311-gfbf-2023b","title":"SciPy-bundle/2023.11-gfbf-2023b","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.1, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.7, numpy-1.26.2, pandas-2.1.3, ply-3.11, pythran-0.14.0, scipy-1.11.4, tzdata-2023.3, versioneer-0.29
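An extension list like the one above can be cross-checked from Python itself. A minimal sketch using only the standard library (the package names are taken from the list above; run it after loading the module, e.g. with `module load SciPy-bundle/2023.11-gfbf-2023b` — this snippet is an illustration, not part of EESSI):

```python
# Report the versions of a few bundled packages visible to the current
# Python interpreter; packages missing from the environment are flagged
# instead of raising an exception.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("numpy", "scipy", "pandas"):
    try:
        print(f"{pkg} {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed in this environment")
```

The reported versions should match the extension list of whichever SciPy-bundle module is loaded.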

"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202307-gfbf-2023a","title":"SciPy-bundle/2023.07-gfbf-2023a","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.0, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.4, numpy-1.25.1, pandas-2.0.3, ply-3.11, pythran-0.13.1, scipy-1.11.1, tzdata-2023.3, versioneer-0.29

"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202302-gfbf-2022b","title":"SciPy-bundle/2023.02-gfbf-2022b","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.5, deap-1.3.3, gast-0.5.3, mpmath-1.2.1, numexpr-2.8.4, numpy-1.24.2, pandas-1.5.3, ply-3.11, pythran-0.12.1, scipy-1.10.1

"},{"location":"available_software/detail/SciTools-Iris/","title":"SciTools-Iris","text":"

A powerful, format-agnostic, community-driven Python package for analysing and visualising Earth science data.

https://scitools-iris.readthedocs.io

"},{"location":"available_software/detail/SciTools-Iris/#available-modules","title":"Available modules","text":"

The overview below shows which SciTools-Iris installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SciTools-Iris, load one of these modules using a module load command like:

module load SciTools-Iris/3.9.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciTools-Iris/3.9.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SciTools-Iris/#scitools-iris390-foss-2023a","title":"SciTools-Iris/3.9.0-foss-2023a","text":"

This is a list of extensions included in the module:

antlr4-python3-runtime-4.7.2, cf-units-3.2.0, scitools_iris-3.9.0

"},{"location":"available_software/detail/Score-P/","title":"Score-P","text":"

The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.

https://www.score-p.org

"},{"location":"available_software/detail/Score-P/#available-modules","title":"Available modules","text":"

The overview below shows which Score-P installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Score-P, load one of these modules using a module load command like:

module load Score-P/8.4-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Score-P/8.4-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/Seaborn/","title":"Seaborn","text":"

Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.

https://seaborn.pydata.org/

"},{"location":"available_software/detail/Seaborn/#available-modules","title":"Available modules","text":"

The overview below shows which Seaborn installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Seaborn, load one of these modules using a module load command like:

module load Seaborn/0.13.2-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Seaborn/0.13.2-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/Shapely/","title":"Shapely","text":"

Shapely is a BSD-licensed Python package for manipulation and analysis of planar geometric objects. It is based on the widely deployed GEOS (the engine of PostGIS) and JTS (from which GEOS is ported) libraries.

https://github.com/Toblerity/Shapely

"},{"location":"available_software/detail/Shapely/#available-modules","title":"Available modules","text":"

The overview below shows which Shapely installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Shapely, load one of these modules using a module load command like:

module load Shapely/2.0.1-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Shapely/2.0.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/SlurmViewer/","title":"SlurmViewer","text":"

View the status of a Slurm cluster, including nodes and queue.

https://gitlab.com/lkeb/slurm_viewer

"},{"location":"available_software/detail/SlurmViewer/#available-modules","title":"Available modules","text":"

The overview below shows which SlurmViewer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SlurmViewer, load one of these modules using a module load command like:

module load SlurmViewer/1.0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SlurmViewer/1.0.1-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/SlurmViewer/#slurmviewer101-gcccore-1320","title":"SlurmViewer/1.0.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

asyncssh-2.18.0, plotext-5.2.8, slurm-viewer-1.0.1, textual-0.85.2, textual-plotext-0.2.1

"},{"location":"available_software/detail/Solids4foam/","title":"Solids4foam","text":"

A toolbox for performing solid mechanics and fluid-solid interactions in OpenFOAM.

https://www.solids4foam.com/

"},{"location":"available_software/detail/Solids4foam/#available-modules","title":"Available modules","text":"

The overview below shows which Solids4foam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Solids4foam, load one of these modules using a module load command like:

module load Solids4foam/2.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Solids4foam/2.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SuiteSparse/","title":"SuiteSparse","text":"

SuiteSparse is a collection of libraries to manipulate sparse matrices.

https://faculty.cse.tamu.edu/davis/suitesparse.html

"},{"location":"available_software/detail/SuiteSparse/#available-modules","title":"Available modules","text":"

The overview below shows which SuiteSparse installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SuiteSparse, load one of these modules using a module load command like:

module load SuiteSparse/7.1.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SuiteSparse/7.1.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SuperLU_DIST/","title":"SuperLU_DIST","text":"

SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines.

https://crd-legacy.lbl.gov/~xiaoye/SuperLU/

"},{"location":"available_software/detail/SuperLU_DIST/#available-modules","title":"Available modules","text":"

The overview below shows which SuperLU_DIST installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SuperLU_DIST, load one of these modules using a module load command like:

module load SuperLU_DIST/8.1.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SuperLU_DIST/8.1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Szip/","title":"Szip","text":"

Szip compression software, providing lossless compression of scientific data

https://www.hdfgroup.org/doc_resource/SZIP/

"},{"location":"available_software/detail/Szip/#available-modules","title":"Available modules","text":"

The overview below shows which Szip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Szip, load one of these modules using a module load command like:

module load Szip/2.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Szip/2.1.1-GCCcore-13.2.0 x x x x x x x x x Szip/2.1.1-GCCcore-12.3.0 x x x x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tcl/","title":"Tcl","text":"

Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.

https://www.tcl.tk/

"},{"location":"available_software/detail/Tcl/#available-modules","title":"Available modules","text":"

The overview below shows which Tcl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tcl, load one of these modules using a module load command like:

module load Tcl/8.6.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tcl/8.6.13-GCCcore-13.2.0 x x x x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/TensorFlow/","title":"TensorFlow","text":"

An open-source software library for Machine Intelligence

https://www.tensorflow.org/

"},{"location":"available_software/detail/TensorFlow/#available-modules","title":"Available modules","text":"

The overview below shows which TensorFlow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using TensorFlow, load one of these modules using a module load command like:

module load TensorFlow/2.13.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 TensorFlow/2.13.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/TensorFlow/#tensorflow2130-foss-2023a","title":"TensorFlow/2.13.0-foss-2023a","text":"

This is a list of extensions included in the module:

absl-py-1.4.0, astor-0.8.1, astunparse-1.6.3, cachetools-5.3.1, google-auth-2.22.0, google-auth-oauthlib-1.0.0, google-pasta-0.2.0, grpcio-1.57.0, gviz-api-1.10.0, keras-2.13.1, Markdown-3.4.4, oauthlib-3.2.2, opt-einsum-3.3.0, portpicker-1.5.2, pyasn1-modules-0.3.0, requests-oauthlib-1.3.1, rsa-4.9, tblib-2.0.0, tensorboard-2.13.0, tensorboard-data-server-0.7.1, tensorboard-plugin-profile-2.13.1, tensorboard-plugin-wit-1.8.1, TensorFlow-2.13.0, tensorflow-estimator-2.13.0, termcolor-2.3.0, Werkzeug-2.3.7, wrapt-1.15.0

"},{"location":"available_software/detail/Tk/","title":"Tk","text":"

Tk is an open source, cross-platform widget toolkit that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.

https://www.tcl.tk/

"},{"location":"available_software/detail/Tk/#available-modules","title":"Available modules","text":"

The overview below shows which Tk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tk, load one of these modules using a module load command like:

module load Tk/8.6.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tk/8.6.13-GCCcore-13.2.0 x x x x x x x x x Tk/8.6.13-GCCcore-12.3.0 x x x x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tkinter/","title":"Tkinter","text":"

Tkinter module, built with the Python build system

https://python.org/

"},{"location":"available_software/detail/Tkinter/#available-modules","title":"Available modules","text":"

The overview below shows which Tkinter installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tkinter, load one of these modules using a module load command like:

module load Tkinter/3.11.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tkinter/3.11.5-GCCcore-13.2.0 x x x x x x x x x Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tombo/","title":"Tombo","text":"

Tombo is a suite of tools primarily for the identification of modified nucleotides from raw nanopore sequencing data.

https://github.com/nanoporetech/tombo

"},{"location":"available_software/detail/Tombo/#available-modules","title":"Available modules","text":"

The overview below shows which Tombo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tombo, load one of these modules using a module load command like:

module load Tombo/1.5.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tombo/1.5.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Tombo/#tombo151-foss-2023a","title":"Tombo/1.5.1-foss-2023a","text":"

This is a list of extensions included in the module:

mappy-2.28, ont-tombo-1.5.1, pyfaidx-0.5.8

"},{"location":"available_software/detail/Transrate/","title":"Transrate","text":"

Transrate is software for de-novo transcriptome assembly quality analysis. It examines your assembly in detail and compares it to experimental evidence such as the sequencing reads, reporting quality scores for contigs and assemblies. This allows you to choose between assemblers and parameters, filter out the bad contigs from an assembly, and help decide when to stop trying to improve the assembly.

https://hibberdlab.com/transrate

"},{"location":"available_software/detail/Transrate/#available-modules","title":"Available modules","text":"

The overview below shows which Transrate installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Transrate, load one of these modules using a module load command like:

module load Transrate/1.0.3-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Transrate/1.0.3-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/UCC-CUDA/","title":"UCC-CUDA","text":"

UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes. This module adds the UCC CUDA support.

https://www.openucx.org/

"},{"location":"available_software/detail/UCC-CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which UCC-CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCC-CUDA, load one of these modules using a module load command like:

module load UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/UCC/","title":"UCC","text":"

UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes.

https://www.openucx.org/

"},{"location":"available_software/detail/UCC/#available-modules","title":"Available modules","text":"

The overview below shows which UCC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCC, load one of these modules using a module load command like:

module load UCC/1.2.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC/1.2.0-GCCcore-13.2.0 x x x x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UCX-CUDA/","title":"UCX-CUDA","text":"

Unified Communication X: an open-source, production-grade communication framework for data-centric and high-performance applications. This module adds the UCX CUDA support.

http://www.openucx.org/

"},{"location":"available_software/detail/UCX-CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which UCX-CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCX-CUDA, load one of these modules using a module load command like:

module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/UCX/","title":"UCX","text":"

Unified Communication X: an open-source, production-grade communication framework for data-centric and high-performance applications.

https://www.openucx.org/

"},{"location":"available_software/detail/UCX/#available-modules","title":"Available modules","text":"

The overview below shows which UCX installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCX, load one of these modules using a module load command like:

module load UCX/1.15.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX/1.15.0-GCCcore-13.2.0 x x x x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UDUNITS/","title":"UDUNITS","text":"

UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.

https://www.unidata.ucar.edu/software/udunits/

"},{"location":"available_software/detail/UDUNITS/#available-modules","title":"Available modules","text":"

The overview below shows which UDUNITS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UDUNITS, load one of these modules using a module load command like:

module load UDUNITS/2.2.28-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UDUNITS/2.2.28-GCCcore-13.2.0 x x x x x x x x x UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UnZip/","title":"UnZip","text":"

UnZip is an extraction utility for archives compressed in .zip format (also called \"zipfiles\"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.

http://www.info-zip.org/UnZip.html

"},{"location":"available_software/detail/UnZip/#available-modules","title":"Available modules","text":"

The overview below shows which UnZip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UnZip, load one of these modules using a module load command like:

module load UnZip/6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UnZip/6.0-GCCcore-13.2.0 x x x x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/VCFtools/","title":"VCFtools","text":"

The aim of VCFtools is to provide easily accessible methods for working with complex genetic variation data in the form of VCF files.

https://vcftools.github.io

"},{"location":"available_software/detail/VCFtools/#available-modules","title":"Available modules","text":"

The overview below shows which VCFtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using VCFtools, load one of these modules using a module load command like:

module load VCFtools/0.1.16-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 VCFtools/0.1.16-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/VTK/","title":"VTK","text":"

The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.

https://www.vtk.org

"},{"location":"available_software/detail/VTK/#available-modules","title":"Available modules","text":"

The overview below shows which VTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using VTK, load one of these modules using a module load command like:

module load VTK/9.3.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 VTK/9.3.0-foss-2023b x x x x x x x x x VTK/9.3.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Valgrind/","title":"Valgrind","text":"

Valgrind: Debugging and profiling tools

https://valgrind.org

"},{"location":"available_software/detail/Valgrind/#available-modules","title":"Available modules","text":"

The overview below shows which Valgrind installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Valgrind, load one of these modules using a module load command like:

module load Valgrind/3.23.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Valgrind/3.23.0-gompi-2023b x x x x x x x x x Valgrind/3.21.0-gompi-2023a x x x x x x x x x Valgrind/3.21.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/Vim/","title":"Vim","text":"

Vim is an advanced text editor that seeks to provide the power of the de-facto Unix editor 'Vi', with a more complete feature set.

http://www.vim.org

"},{"location":"available_software/detail/Vim/#available-modules","title":"Available modules","text":"

The overview below shows which Vim installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Vim, load one of these modules using a module load command like:

module load Vim/9.1.0004-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Vim/9.1.0004-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Voro%2B%2B/","title":"Voro++","text":"

Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (e.g. volume, centroid, number of faces) can be used to analyze a system of particles.

http://math.lbl.gov/voro++/

"},{"location":"available_software/detail/Voro%2B%2B/#available-modules","title":"Available modules","text":"

The overview below shows which Voro++ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Voro++, load one of these modules using a module load command like:

module load Voro++/0.4.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Voro++/0.4.6-GCCcore-13.2.0 x x x x x x x x x Voro++/0.4.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/WCSLIB/","title":"WCSLIB","text":"

The FITS \"World Coordinate System\" (WCS) standard defines keywords and usage that provide for the description of astronomical coordinate systems in a FITS image header.

https://www.atnf.csiro.au/people/mcalabre/WCS/

"},{"location":"available_software/detail/WCSLIB/#available-modules","title":"Available modules","text":"

The overview below shows which WCSLIB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WCSLIB, load one of these modules using a module load command like:

module load WCSLIB/7.11-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WCSLIB/7.11-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/WRF/","title":"WRF","text":"

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.

https://www.wrf-model.org

"},{"location":"available_software/detail/WRF/#available-modules","title":"Available modules","text":"

The overview below shows which WRF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WRF, load one of these modules using a module load command like:

module load WRF/4.4.1-foss-2022b-dmpar\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WRF/4.4.1-foss-2022b-dmpar x x x x x x - x x"},{"location":"available_software/detail/WSClean/","title":"WSClean","text":"

WSClean (w-stacking clean) is a fast generic widefield imager. It implements several gridding algorithms and offers fully-automated multi-scale multi-frequency deconvolution.

https://wsclean.readthedocs.io/

"},{"location":"available_software/detail/WSClean/#available-modules","title":"Available modules","text":"

The overview below shows which WSClean installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WSClean, load one of these modules using a module load command like:

module load WSClean/3.4-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WSClean/3.4-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/Wayland/","title":"Wayland","text":"

Wayland is a project to define a protocol for a compositor to talk to its clients as well as a library implementation of the protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.

https://wayland.freedesktop.org/

"},{"location":"available_software/detail/Wayland/#available-modules","title":"Available modules","text":"

The overview below shows which Wayland installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Wayland, load one of these modules using a module load command like:

module load Wayland/1.22.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Wayland/1.22.0-GCCcore-13.2.0 x x x x x x x x x Wayland/1.22.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/WhatsHap/","title":"WhatsHap","text":"

WhatsHap is a software for phasing genomic variants using DNA sequencing reads, also called read-based phasing or haplotype assembly. It is especially suitable for long reads, but also works well with short reads.

https://whatshap.readthedocs.io

"},{"location":"available_software/detail/WhatsHap/#available-modules","title":"Available modules","text":"

The overview below shows which WhatsHap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WhatsHap, load one of these modules using a module load command like:

module load WhatsHap/2.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WhatsHap/2.2-foss-2023a x x x x x x x x x WhatsHap/2.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/WhatsHap/#whatshap22-foss-2023a","title":"WhatsHap/2.2-foss-2023a","text":"

This is a list of extensions included in the module:

PuLP-2.8.0, whatshap-2.2, xopen-1.7.0

"},{"location":"available_software/detail/WhatsHap/#whatshap21-foss-2022b","title":"WhatsHap/2.1-foss-2022b","text":"

This is a list of extensions included in the module:

pulp-2.8.0, WhatsHap-2.1, xopen-1.7.0

"},{"location":"available_software/detail/X11/","title":"X11","text":"

The X Window System (X11) is a windowing system for bitmap displays

https://www.x.org

"},{"location":"available_software/detail/X11/#available-modules","title":"Available modules","text":"

The overview below shows which X11 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using X11, load one of these modules using a module load command like:

module load X11/20231019-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 X11/20231019-GCCcore-13.2.0 x x x x x x x x x X11/20230603-GCCcore-12.3.0 x x x x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/XML-LibXML/","title":"XML-LibXML","text":"

Perl binding for libxml2

https://metacpan.org/pod/distribution/XML-LibXML/LibXML.pod

"},{"location":"available_software/detail/XML-LibXML/#available-modules","title":"Available modules","text":"

The overview below shows which XML-LibXML installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using XML-LibXML, load one of these modules using a module load command like:

module load XML-LibXML/2.0209-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 XML-LibXML/2.0209-GCCcore-12.3.0 x x x x x x x x x XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/XML-LibXML/#xml-libxml20209-gcccore-1230","title":"XML-LibXML/2.0209-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Alien::Base-2.80, Alien::Build::Plugin::Download::GitLab-0.01, Alien::Libxml2-0.19, File::chdir-0.1011, XML::LibXML-2.0209

"},{"location":"available_software/detail/XML-LibXML/#xml-libxml20208-gcccore-1220","title":"XML-LibXML/2.0208-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Alien::Base-2.80, Alien::Build::Plugin::Download::GitLab-0.01, Alien::Libxml2-0.19, File::chdir-0.1011, XML::LibXML-2.0208

"},{"location":"available_software/detail/Xerces-C%2B%2B/","title":"Xerces-C++","text":"

Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.

https://xerces.apache.org/xerces-c/

"},{"location":"available_software/detail/Xerces-C%2B%2B/#available-modules","title":"Available modules","text":"

The overview below shows which Xerces-C++ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Xerces-C++, load one of these modules using a module load command like:

module load Xerces-C++/3.2.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xerces-C++/3.2.5-GCCcore-13.2.0 x x x x x x x x x Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Xvfb/","title":"Xvfb","text":"

Xvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory.

https://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml

"},{"location":"available_software/detail/Xvfb/#available-modules","title":"Available modules","text":"

The overview below shows which Xvfb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Xvfb, load one of these modules using a module load command like:

module load Xvfb/21.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xvfb/21.1.9-GCCcore-13.2.0 x x x x x x x x x Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/YODA/","title":"YODA","text":"

Yet more Objects for (High Energy Physics) Data Analysis

https://yoda.hepforge.org/

"},{"location":"available_software/detail/YODA/#available-modules","title":"Available modules","text":"

The overview below shows which YODA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using YODA, load one of these modules using a module load command like:

module load YODA/1.9.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 YODA/1.9.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Yasm/","title":"Yasm","text":"

Yasm: Complete rewrite of the NASM assembler with BSD license

https://www.tortall.net/projects/yasm/

"},{"location":"available_software/detail/Yasm/#available-modules","title":"Available modules","text":"

The overview below shows which Yasm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Yasm, load one of these modules using a module load command like:

module load Yasm/1.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Yasm/1.3.0-GCCcore-13.2.0 - - - x x x x x x Yasm/1.3.0-GCCcore-12.3.0 - - - x x x x x x Yasm/1.3.0-GCCcore-12.2.0 - - - x x x - x x"},{"location":"available_software/detail/Z3/","title":"Z3","text":"

Z3 is a theorem prover from Microsoft Research with support for bitvectors, booleans, arrays, floating point numbers, strings, and other data types. This module includes z3-solver, the Python interface of Z3.

https://github.com/Z3Prover/z3

"},{"location":"available_software/detail/Z3/#available-modules","title":"Available modules","text":"

The overview below shows which Z3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Z3, load one of these modules using a module load command like:

module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x - x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x x x x Z3/4.12.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230-python-3113","title":"Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3","text":"

This is a list of extensions included in the module:

z3-solver-4.12.2.0

"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230","title":"Z3/4.12.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

z3-solver-4.12.2.0

"},{"location":"available_software/detail/ZeroMQ/","title":"ZeroMQ","text":"

ZeroMQ looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems.

https://www.zeromq.org/

"},{"location":"available_software/detail/ZeroMQ/#available-modules","title":"Available modules","text":"

The overview below shows which ZeroMQ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ZeroMQ, load one of these modules using a module load command like:

module load ZeroMQ/4.3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ZeroMQ/4.3.5-GCCcore-13.2.0 x x x x x x x x x ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Zip/","title":"Zip","text":"

Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality.

http://www.info-zip.org/Zip.html

"},{"location":"available_software/detail/Zip/#available-modules","title":"Available modules","text":"

The overview below shows which Zip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Zip, load one of these modules using a module load command like:

module load Zip/3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Zip/3.0-GCCcore-12.3.0 x x x x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/amdahl/","title":"amdahl","text":"

This Python module contains a pseudo-application that can be used as a black box to reproduce Amdahl's Law. It does not do real calculations, nor any real communication, so it can easily be overloaded.

https://github.com/hpc-carpentry/amdahl

"},{"location":"available_software/detail/amdahl/#available-modules","title":"Available modules","text":"

The overview below shows which amdahl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using amdahl, load one of these modules using a module load command like:

module load amdahl/0.3.1-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 amdahl/0.3.1-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/archspec/","title":"archspec","text":"

A library for detecting, labeling, and reasoning about microarchitectures

https://github.com/archspec/archspec

"},{"location":"available_software/detail/archspec/#available-modules","title":"Available modules","text":"

The overview below shows which archspec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using archspec, load one of these modules using a module load command like:

module load archspec/0.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 archspec/0.2.2-GCCcore-13.2.0 x x x x x x x x x archspec/0.2.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/arpack-ng/","title":"arpack-ng","text":"

ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems.

https://github.com/opencollab/arpack-ng

"},{"location":"available_software/detail/arpack-ng/#available-modules","title":"Available modules","text":"

The overview below shows which arpack-ng installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using arpack-ng, load one of these modules using a module load command like:

module load arpack-ng/3.9.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 arpack-ng/3.9.0-foss-2023b x x x x x x x x x arpack-ng/3.9.0-foss-2023a x x x x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/arrow-R/","title":"arrow-R","text":"

R interface to the Apache Arrow C++ library

https://cran.r-project.org/web/packages/arrow

"},{"location":"available_software/detail/arrow-R/#available-modules","title":"Available modules","text":"

The overview below shows which arrow-R installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using arrow-R, load one of these modules using a module load command like:

module load arrow-R/14.0.1-foss-2023a-R-4.3.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 arrow-R/14.0.1-foss-2023a-R-4.3.2 x x x x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x - x x"},{"location":"available_software/detail/at-spi2-atk/","title":"at-spi2-atk","text":"

AT-SPI 2 toolkit bridge

https://wiki.gnome.org/Accessibility

"},{"location":"available_software/detail/at-spi2-atk/#available-modules","title":"Available modules","text":"

The overview below shows which at-spi2-atk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using at-spi2-atk, load one of these modules using a module load command like:

module load at-spi2-atk/2.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-atk/2.38.0-GCCcore-13.2.0 x x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/at-spi2-core/","title":"at-spi2-core","text":"

Assistive Technology Service Provider Interface.

https://wiki.gnome.org/Accessibility

"},{"location":"available_software/detail/at-spi2-core/#available-modules","title":"Available modules","text":"

The overview below shows which at-spi2-core installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using at-spi2-core, load one of these modules using a module load command like:

module load at-spi2-core/2.50.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-core/2.50.0-GCCcore-13.2.0 x x x x x x x x x at-spi2-core/2.49.91-GCCcore-12.3.0 x x x x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/basemap/","title":"basemap","text":"

The matplotlib basemap toolkit is a library for plotting 2D data on maps in Python

https://matplotlib.org/basemap/

"},{"location":"available_software/detail/basemap/#available-modules","title":"Available modules","text":"

The overview below shows which basemap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using basemap, load one of these modules using a module load command like:

module load basemap/1.3.9-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 basemap/1.3.9-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/basemap/#basemap139-foss-2023a","title":"basemap/1.3.9-foss-2023a","text":"

This is a list of extensions included in the module:

basemap-1.3.9, basemap_data-1.3.9, pyshp-2.3.1

"},{"location":"available_software/detail/bokeh/","title":"bokeh","text":"

Statistical and novel interactive HTML plots for Python

https://github.com/bokeh/bokeh

"},{"location":"available_software/detail/bokeh/#available-modules","title":"Available modules","text":"

The overview below shows which bokeh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using bokeh, load one of these modules using a module load command like:

module load bokeh/3.2.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 bokeh/3.2.2-foss-2023a x x x x x x x x x bokeh/3.2.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/bokeh/#bokeh322-foss-2023a","title":"bokeh/3.2.2-foss-2023a","text":"

This is a list of extensions included in the module:

bokeh-3.2.2, contourpy-1.0.7, xyzservices-2023.7.0

"},{"location":"available_software/detail/bokeh/#bokeh321-foss-2022b","title":"bokeh/3.2.1-foss-2022b","text":"

This is a list of extensions included in the module:

bokeh-3.2.1, contourpy-1.0.7, tornado-6.3.2, xyzservices-2023.7.0

"},{"location":"available_software/detail/cURL/","title":"cURL","text":"

libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.

https://curl.haxx.se

"},{"location":"available_software/detail/cURL/#available-modules","title":"Available modules","text":"

The overview below shows which cURL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cURL, load one of these modules using a module load command like:

module load cURL/8.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cURL/8.3.0-GCCcore-13.2.0 x x x x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/cairo/","title":"cairo","text":"

Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB

https://cairographics.org

"},{"location":"available_software/detail/cairo/#available-modules","title":"Available modules","text":"

The overview below shows which cairo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cairo, load one of these modules using a module load command like:

module load cairo/1.18.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cairo/1.18.0-GCCcore-13.2.0 x x x x x x x x x cairo/1.17.8-GCCcore-12.3.0 x x x x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/casacore/","title":"casacore","text":"

A suite of C++ libraries for radio astronomy data processing. The ephemerides data needs to be in DATA_DIR and the location must be specified at runtime. Thus users can update them.

https://github.com/casacore/casacore

"},{"location":"available_software/detail/casacore/#available-modules","title":"Available modules","text":"

The overview below shows which casacore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using casacore, load one of these modules using a module load command like:

module load casacore/3.5.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 casacore/3.5.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/ccache/","title":"ccache","text":"

Ccache (or \u201cccache\u201d) is a compiler cache. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again.

https://ccache.dev/

"},{"location":"available_software/detail/ccache/#available-modules","title":"Available modules","text":"

The overview below shows which ccache installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ccache, load one of these modules using a module load command like:

module load ccache/4.9-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ccache/4.9-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cffi/","title":"cffi","text":"

C Foreign Function Interface for Python. Interact with almost any C code from Python, based on C-like declarations that you can often copy-paste from header files or documentation.

https://cffi.readthedocs.io/en/latest/

"},{"location":"available_software/detail/cffi/#available-modules","title":"Available modules","text":"

The overview below shows which cffi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cffi, load one of these modules using a module load command like:

module load cffi/1.15.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cffi/1.15.1-GCCcore-13.2.0 x x x x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1320","title":"cffi/1.15.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

cffi-1.15.1, pycparser-2.21

"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1230","title":"cffi/1.15.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

cffi-1.15.1, pycparser-2.21

"},{"location":"available_software/detail/cimfomfa/","title":"cimfomfa","text":"

This library supports both MCL, a cluster algorithm for graphs, and zoem, a macro/DSL language. It supplies abstractions for memory management, I/O, associative arrays, strings, heaps, and a few other things. The string library has had heavy testing as part of zoem. Both understandably and regrettably I chose long ago to make it C-string-compatible, hence nul bytes may not be part of a string. At some point I hope to rectify this, perhaps unrealistically.

https://github.com/micans/cimfomfa

"},{"location":"available_software/detail/cimfomfa/#available-modules","title":"Available modules","text":"

The overview below shows which cimfomfa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cimfomfa, load one of these modules using a module load command like:

module load cimfomfa/22.273-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cimfomfa/22.273-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/colorize/","title":"colorize","text":"

Ruby gem for colorizing text using ANSI escape sequences. Extends the String class or adds a ColorizedString class with methods to set the text color, background color, and text effects.

https://github.com/fazibear/colorize

"},{"location":"available_software/detail/colorize/#available-modules","title":"Available modules","text":"

The overview below shows which colorize installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using colorize, load one of these modules using a module load command like:

module load colorize/0.7.7-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 colorize/0.7.7-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cooler/","title":"cooler","text":"

Cooler is a support library for a storage format, also called cooler, used to store genomic interaction data of any size, such as Hi-C contact matrices.

https://open2c.github.io/cooler

"},{"location":"available_software/detail/cooler/#available-modules","title":"Available modules","text":"

The overview below shows which cooler installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cooler, load one of these modules using a module load command like:

module load cooler/0.10.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cooler/0.10.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/cooler/#cooler0102-foss-2023b","title":"cooler/0.10.2-foss-2023b","text":"

This is a list of extensions included in the module:

asciitree-0.3.3, cooler-0.10.2, cytoolz-1.0.0, toolz-1.0.0

"},{"location":"available_software/detail/cpio/","title":"cpio","text":"

The cpio package contains tools for archiving.

https://savannah.gnu.org/projects/cpio/

"},{"location":"available_software/detail/cpio/#available-modules","title":"Available modules","text":"

The overview below shows which cpio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cpio, load one of these modules using a module load command like:

module load cpio/2.15-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cpio/2.15-GCCcore-12.3.0 x x x x x x x x x cpio/2.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/cppy/","title":"cppy","text":"

A small C++ header library which makes it easier to write Python extension modules. The primary feature is a PyObject smart pointer which automatically handles reference counting and provides convenience methods for performing common object operations.

https://github.com/nucleic/cppy

"},{"location":"available_software/detail/cppy/#available-modules","title":"Available modules","text":"

The overview below shows which cppy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cppy, load one of these modules using a module load command like:

module load cppy/1.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cppy/1.2.1-GCCcore-13.2.0 x x x x x x x x x cppy/1.2.1-GCCcore-12.3.0 x x x x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/crb-blast/","title":"crb-blast","text":"

Conditional Reciprocal Best BLAST - high confidence ortholog assignment.

https://github.com/cboursnell/crb-blast

"},{"location":"available_software/detail/crb-blast/#available-modules","title":"Available modules","text":"

The overview below shows which crb-blast installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using crb-blast, load one of these modules using a module load command like:

module load crb-blast/0.6.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 crb-blast/0.6.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/crb-blast/#crb-blast069-gcc-1230","title":"crb-blast/0.6.9-GCC-12.3.0","text":"

This is a list of extensions included in the module:

bindeps-1.2.1, bio-1.6.0.pre.20181210, crb-blast-0.6.9, facade-1.2.1, fixwhich-1.0.2, pathname2-1.8.4, threach-0.2.0, trollop-2.9.10

"},{"location":"available_software/detail/cryptography/","title":"cryptography","text":"

cryptography is a package designed to expose cryptographic primitives and recipes to Python developers.

https://github.com/pyca/cryptography

"},{"location":"available_software/detail/cryptography/#available-modules","title":"Available modules","text":"

The overview below shows which cryptography installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cryptography, load one of these modules using a module load command like:

module load cryptography/41.0.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cryptography/41.0.5-GCCcore-13.2.0 x x x x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/dask/","title":"dask","text":"

Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.

https://dask.org/

"},{"location":"available_software/detail/dask/#available-modules","title":"Available modules","text":"

The overview below shows which dask installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dask, load one of these modules using a module load command like:

module load dask/2023.9.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dask/2023.9.2-foss-2023a x x x x x x x x x dask/2023.7.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/dask/#dask202392-foss-2023a","title":"dask/2023.9.2-foss-2023a","text":"

This is a list of extensions included in the module:

dask-2023.9.2, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.9.2, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, zict-3.0.0

"},{"location":"available_software/detail/dask/#dask202371-foss-2022b","title":"dask/2023.7.1-foss-2022b","text":"

This is a list of extensions included in the module:

dask-2023.7.1, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.7.1, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, versioneer-0.29, zict-3.0.0

"},{"location":"available_software/detail/dill/","title":"dill","text":"

dill extends Python's pickle module for serializing and de-serializing Python objects to the majority of the built-in Python types. Serialization is the process of converting an object to a byte stream; the inverse is converting a byte stream back to a Python object hierarchy.

https://pypi.org/project/dill/

"},{"location":"available_software/detail/dill/#available-modules","title":"Available modules","text":"

The overview below shows which dill installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dill, load one of these modules using a module load command like:

module load dill/0.3.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dill/0.3.8-GCCcore-13.2.0 x x x x x x x x x dill/0.3.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/dlb/","title":"dlb","text":"

DLB is a dynamic library designed to speed up HPC hybrid applications (i.e., two levels of parallelism) by improving the load balance of the outer level of parallelism (e.g., MPI) by dynamically redistributing the computational resources at the inner level of parallelism (e.g., OpenMP) at run time.

https://pm.bsc.es/dlb/

"},{"location":"available_software/detail/dlb/#available-modules","title":"Available modules","text":"

The overview below shows which dlb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dlb, load one of these modules using a module load command like:

module load dlb/3.4-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dlb/3.4-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/double-conversion/","title":"double-conversion","text":"

Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles.

https://github.com/google/double-conversion

"},{"location":"available_software/detail/double-conversion/#available-modules","title":"Available modules","text":"

The overview below shows which double-conversion installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using double-conversion, load one of these modules using a module load command like:

module load double-conversion/3.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 double-conversion/3.3.0-GCCcore-13.2.0 x x x x x x x x x double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ecBuild/","title":"ecBuild","text":"

A CMake-based build system, consisting of a collection of CMake macros and functions that ease the management of software build systems.

https://ecbuild.readthedocs.io/

"},{"location":"available_software/detail/ecBuild/#available-modules","title":"Available modules","text":"

The overview below shows which ecBuild installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ecBuild, load one of these modules using a module load command like:

module load ecBuild/3.8.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecBuild/3.8.0 x x x x x x x x x"},{"location":"available_software/detail/ecCodes/","title":"ecCodes","text":"

ecCodes is a package developed by ECMWF which provides an application programming interface and a set of tools for decoding and encoding messages in the following formats: WMO FM-92 GRIB edition 1 and edition 2, WMO FM-94 BUFR edition 3 and edition 4, WMO GTS abbreviated header (only decoding).

https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home

"},{"location":"available_software/detail/ecCodes/#available-modules","title":"Available modules","text":"

The overview below shows which ecCodes installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ecCodes, load one of these modules using a module load command like:

module load ecCodes/2.31.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecCodes/2.31.0-gompi-2023b x x x x x x x x x ecCodes/2.31.0-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/elfutils/","title":"elfutils","text":"

The elfutils project provides libraries and tools for ELF files and DWARF data.

https://elfutils.org/

"},{"location":"available_software/detail/elfutils/#available-modules","title":"Available modules","text":"

The overview below shows which elfutils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using elfutils, load one of these modules using a module load command like:

module load elfutils/0.190-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 elfutils/0.190-GCCcore-13.2.0 x x x x x x x x x elfutils/0.189-GCCcore-12.3.0 x x x x x x x x x elfutils/0.189-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/expat/","title":"expat","text":"

Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags).

https://libexpat.github.io

"},{"location":"available_software/detail/expat/#available-modules","title":"Available modules","text":"

The overview below shows which expat installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using expat, load one of these modules using a module load command like:

module load expat/2.5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 expat/2.5.0-GCCcore-13.2.0 x x x x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/expecttest/","title":"expecttest","text":"

This library implements expect tests (also known as \"golden\" tests). Expect tests are a method of writing tests where instead of hard-coding the expected output of a test, you run the test to get the output, and the test framework automatically populates the expected output. If the output of the test changes, you can rerun the test with the environment variable EXPECTTEST_ACCEPT=1 to automatically update the expected output.

https://github.com/ezyang/expecttest

"},{"location":"available_software/detail/expecttest/#available-modules","title":"Available modules","text":"

The overview below shows which expecttest installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using expecttest, load one of these modules using a module load command like:

module load expecttest/0.1.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 expecttest/0.1.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/f90wrap/","title":"f90wrap","text":"

f90wrap is a tool to automatically generate Python extension modules which interface to Fortran code that makes use of derived types. It builds on the capabilities of the popular f2py utility by generating a simpler Fortran 90 interface to the original Fortran code which is then suitable for wrapping with f2py, together with a higher-level Pythonic wrapper that makes the existence of an additional layer transparent to the final user.

https://github.com/jameskermode/f90wrap

"},{"location":"available_software/detail/f90wrap/#available-modules","title":"Available modules","text":"

The overview below shows which f90wrap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using f90wrap, load one of these modules using a module load command like:

module load f90wrap/0.2.13-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 f90wrap/0.2.13-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/fastjet-contrib/","title":"fastjet-contrib","text":"

3rd party extensions of FastJet

https://fastjet.hepforge.org/contrib/

"},{"location":"available_software/detail/fastjet-contrib/#available-modules","title":"Available modules","text":"

The overview below shows which fastjet-contrib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastjet-contrib, load one of these modules using a module load command like:

module load fastjet-contrib/1.053-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet-contrib/1.053-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/fastjet/","title":"fastjet","text":"

A software package for jet finding in pp and e+e- collisions

https://fastjet.fr/

"},{"location":"available_software/detail/fastjet/#available-modules","title":"Available modules","text":"

The overview below shows which fastjet installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastjet, load one of these modules using a module load command like:

module load fastjet/3.4.2-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet/3.4.2-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/fastp/","title":"fastp","text":"

A tool designed to provide fast all-in-one preprocessing for FastQ files. This tool is developed in C++ with multithreading support to afford high performance.

https://github.com/OpenGene/fastp

"},{"location":"available_software/detail/fastp/#available-modules","title":"Available modules","text":"

The overview below shows which fastp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastp, load one of these modules using a module load command like:

module load fastp/0.23.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastp/0.23.4-GCC-12.3.0 x x x x x x x x x fastp/0.23.4-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ffnvcodec/","title":"ffnvcodec","text":"

FFmpeg NVIDIA headers. Adds support for NVENC and NVDEC. Requires an NVIDIA GPU and drivers to be present (picked up dynamically).

https://git.videolan.org/?p=ffmpeg/nv-codec-headers.git

"},{"location":"available_software/detail/ffnvcodec/#available-modules","title":"Available modules","text":"

The overview below shows which ffnvcodec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ffnvcodec, load one of these modules using a module load command like:

module load ffnvcodec/12.1.14.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ffnvcodec/12.1.14.0 x x x x x x x x x ffnvcodec/12.0.16.0 x x x x x x x x x ffnvcodec/11.1.5.2 x x x x x x - x x"},{"location":"available_software/detail/flatbuffers-python/","title":"flatbuffers-python","text":"

Python Flatbuffers runtime library.

https://github.com/google/flatbuffers/

"},{"location":"available_software/detail/flatbuffers-python/#available-modules","title":"Available modules","text":"

The overview below shows which flatbuffers-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flatbuffers-python, load one of these modules using a module load command like:

module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flatbuffers/","title":"flatbuffers","text":"

FlatBuffers: Memory Efficient Serialization Library

https://github.com/google/flatbuffers/

"},{"location":"available_software/detail/flatbuffers/#available-modules","title":"Available modules","text":"

The overview below shows which flatbuffers installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flatbuffers, load one of these modules using a module load command like:

module load flatbuffers/23.5.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flit/","title":"flit","text":"

A simple packaging tool for simple packages.

https://github.com/pypa/flit

"},{"location":"available_software/detail/flit/#available-modules","title":"Available modules","text":"

The overview below shows which flit installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flit, load one of these modules using a module load command like:

module load flit/3.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flit/3.9.0-GCCcore-13.2.0 x x x x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flit/#flit390-gcccore-1320","title":"flit/3.9.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

certifi-2023.7.22, charset-normalizer-3.3.1, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.2, requests-2.31.0, setuptools-scm-8.0.4, tomli_w-1.0.0, typing_extensions-4.8.0, urllib3-2.0.7

"},{"location":"available_software/detail/flit/#flit390-gcccore-1230","title":"flit/3.9.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

certifi-2023.5.7, charset-normalizer-3.1.0, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.1, requests-2.31.0, setuptools_scm-7.1.0, tomli_w-1.0.0, typing_extensions-4.6.3, urllib3-1.26.16

"},{"location":"available_software/detail/fontconfig/","title":"fontconfig","text":"

Fontconfig is a library designed to provide system-wide font configuration, customization and application access.

https://www.freedesktop.org/wiki/Software/fontconfig/

"},{"location":"available_software/detail/fontconfig/#available-modules","title":"Available modules","text":"

The overview below shows which fontconfig installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fontconfig, load one of these modules using a module load command like:

module load fontconfig/2.14.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fontconfig/2.14.2-GCCcore-13.2.0 x x x x x x x x x fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/foss/","title":"foss","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.

https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain

"},{"location":"available_software/detail/foss/#available-modules","title":"Available modules","text":"

The overview below shows which foss installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using foss, load one of these modules using a module load command like:

module load foss/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 foss/2023b x x x x x x x x x foss/2023a x x x x x x x x x foss/2022b x x x x x x - x x"},{"location":"available_software/detail/freeglut/","title":"freeglut","text":"

freeglut is a completely OpenSourced alternative to the OpenGL Utility Toolkit (GLUT) library.

http://freeglut.sourceforge.net/

"},{"location":"available_software/detail/freeglut/#available-modules","title":"Available modules","text":"

The overview below shows which freeglut installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using freeglut, load one of these modules using a module load command like:

module load freeglut/3.4.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 freeglut/3.4.0-GCCcore-12.3.0 x x x x x x x x x freeglut/3.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/freetype/","title":"freetype","text":"

FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.

https://www.freetype.org

"},{"location":"available_software/detail/freetype/#available-modules","title":"Available modules","text":"

The overview below shows which freetype installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using freetype, load one of these modules using a module load command like:

module load freetype/2.13.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 freetype/2.13.2-GCCcore-13.2.0 x x x x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/geopandas/","title":"geopandas","text":"

GeoPandas is a project to add support for geographic data to pandas objects. It currently implements GeoSeries and GeoDataFrame types which are subclasses of pandas.Series and pandas.DataFrame respectively. GeoPandas objects can act on shapely geometry objects and perform geometric operations.

https://geopandas.org

"},{"location":"available_software/detail/geopandas/#available-modules","title":"Available modules","text":"

The overview below shows which geopandas installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using geopandas, load one of these modules using a module load command like:

module load geopandas/0.14.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 geopandas/0.14.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/geopandas/#geopandas0142-foss-2023a","title":"geopandas/0.14.2-foss-2023a","text":"

This is a list of extensions included in the module:

geopandas-0.14.2, mapclassify-2.6.1

"},{"location":"available_software/detail/gfbf/","title":"gfbf","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including FlexiBLAS (BLAS and LAPACK support) and (serial) FFTW.

(none)

"},{"location":"available_software/detail/gfbf/#available-modules","title":"Available modules","text":"

The overview below shows which gfbf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gfbf, load one of these modules using a module load command like:

module load gfbf/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gfbf/2023b x x x x x x x x x gfbf/2023a x x x x x x x x x gfbf/2022b x x x x x x - x x"},{"location":"available_software/detail/giflib/","title":"giflib","text":"

giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif, which was in wide use while the LZW compression algorithm was patented.

http://giflib.sourceforge.net/

"},{"location":"available_software/detail/giflib/#available-modules","title":"Available modules","text":"

The overview below shows which giflib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using giflib, load one of these modules using a module load command like:

module load giflib/5.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 giflib/5.2.1-GCCcore-13.2.0 x x x x x x x x x giflib/5.2.1-GCCcore-12.3.0 x x x x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/git/","title":"git","text":"

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

https://git-scm.com

"},{"location":"available_software/detail/git/#available-modules","title":"Available modules","text":"

The overview below shows which git installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using git, load one of these modules using a module load command like:

module load git/2.42.0-GCCcore-13.2.0\n
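After loading the module, a quick sanity-check session might look like the following sketch (the repository name and identity settings are illustrative, not from this page; it only assumes a `git` binary is on the `PATH`):

```shell
# Quick git sanity check (illustrative names; assumes git is on PATH,
# e.g. after the module load above)
git --version                      # confirm which git is picked up
git init -q demo-repo              # create a fresh repository
git -C demo-repo -c user.name=You -c user.email=you@example.com \
    commit --allow-empty -m "initial commit"
git -C demo-repo log --oneline     # shows the single commit just created
```

Passing `-c user.name=…`/`-c user.email=…` on the command line avoids depending on a pre-existing global git configuration.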

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 git/2.42.0-GCCcore-13.2.0 x x x x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x - x x"},{"location":"available_software/detail/gmpy2/","title":"gmpy2","text":"

GMP/MPIR, MPFR, and MPC interface to Python 2.6+ and 3.x

https://github.com/aleaxit/gmpy

"},{"location":"available_software/detail/gmpy2/#available-modules","title":"Available modules","text":"

The overview below shows which gmpy2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gmpy2, load one of these modules using a module load command like:

module load gmpy2/2.1.5-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmpy2/2.1.5-GCC-13.2.0 x x x x x x x x x gmpy2/2.1.5-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/gmsh/","title":"gmsh","text":"

Gmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor.

https://gmsh.info/

"},{"location":"available_software/detail/gmsh/#available-modules","title":"Available modules","text":"

The overview below shows which gmsh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gmsh, load one of these modules using a module load command like:

module load gmsh/4.12.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmsh/4.12.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/gnuplot/","title":"gnuplot","text":"

Portable, interactive function plotting utility

http://gnuplot.sourceforge.net

"},{"location":"available_software/detail/gnuplot/#available-modules","title":"Available modules","text":"

The overview below shows which gnuplot installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gnuplot, load one of these modules using a module load command like:

module load gnuplot/5.4.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x x x x gnuplot/5.4.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/gompi/","title":"gompi","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.

(none)

"},{"location":"available_software/detail/gompi/#available-modules","title":"Available modules","text":"

The overview below shows which gompi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gompi, load one of these modules using a module load command like:

module load gompi/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gompi/2023b x x x x x x x x x gompi/2023a x x x x x x x x x gompi/2022b x x x x x x - x x"},{"location":"available_software/detail/googletest/","title":"googletest","text":"

Google's framework for writing C++ tests on a variety of platforms

https://github.com/google/googletest

"},{"location":"available_software/detail/googletest/#available-modules","title":"Available modules","text":"

The overview below shows which googletest installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using googletest, load one of these modules using a module load command like:

module load googletest/1.14.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 googletest/1.14.0-GCCcore-13.2.0 x x x x x x x x x googletest/1.13.0-GCCcore-12.3.0 x x x x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/graphite2/","title":"graphite2","text":"

Graphite is a \"smart font\" system developed specifically to handle the complexities of lesser-known languages of the world.

https://scripts.sil.org/cms/scripts/page.php?site_id=projects&item_id=graphite_home

"},{"location":"available_software/detail/graphite2/#available-modules","title":"Available modules","text":"

The overview below shows which graphite2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using graphite2, load one of these modules using a module load command like:

module load graphite2/1.3.14-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 graphite2/1.3.14-GCCcore-13.2.0 x x x x x x x x x graphite2/1.3.14-GCCcore-12.3.0 x x x x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/groff/","title":"groff","text":"

Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.

https://www.gnu.org/software/groff

"},{"location":"available_software/detail/groff/#available-modules","title":"Available modules","text":"

The overview below shows which groff installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using groff, load one of these modules using a module load command like:

module load groff/1.22.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 groff/1.22.4-GCCcore-12.3.0 x x x x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/grpcio/","title":"grpcio","text":"

gRPC is a modern, open source, high-performance remote procedure call (RPC) framework that can run anywhere. gRPC enables client and server applications to communicate transparently, and simplifies the building of connected systems.

https://grpc.io/

"},{"location":"available_software/detail/grpcio/#available-modules","title":"Available modules","text":"

The overview below shows which grpcio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using grpcio, load one of these modules using a module load command like:

module load grpcio/1.57.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 grpcio/1.57.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/grpcio/#grpcio1570-gcccore-1230","title":"grpcio/1.57.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

grpcio-1.57.0

"},{"location":"available_software/detail/gtk-doc/","title":"gtk-doc","text":"

Documentation tool for public library API

https://gitlab.gnome.org/GNOME/gtk-doc

"},{"location":"available_software/detail/gtk-doc/#available-modules","title":"Available modules","text":"

The overview below shows which gtk-doc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gtk-doc, load one of these modules using a module load command like:

module load gtk-doc/1.34.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gtk-doc/1.34.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/gzip/","title":"gzip","text":"

gzip (GNU zip) is a popular data compression program, used as a replacement for compress

https://www.gnu.org/software/gzip/

"},{"location":"available_software/detail/gzip/#available-modules","title":"Available modules","text":"

The overview below shows which gzip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gzip, load one of these modules using a module load command like:

module load gzip/1.13-GCCcore-13.2.0\n
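Once a gzip module is loaded (or wherever `gzip` is on the `PATH`), basic round-trip compression looks like this; the file name `demo.txt` is just an illustration:

```shell
# Create a small file, compress it, then restore it with gzip
printf 'hello world\n' > demo.txt
gzip demo.txt        # produces demo.txt.gz and removes demo.txt
gzip -d demo.txt.gz  # decompresses, restoring demo.txt
cat demo.txt
```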

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gzip/1.13-GCCcore-13.2.0 x x x x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/h5netcdf/","title":"h5netcdf","text":"

A Python interface for the netCDF4 file-format that reads and writes local or remote HDF5 files directly via h5py or h5pyd, without relying on the Unidata netCDF library.

https://h5netcdf.org/

"},{"location":"available_software/detail/h5netcdf/#available-modules","title":"Available modules","text":"

The overview below shows which h5netcdf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using h5netcdf, load one of these modules using a module load command like:

module load h5netcdf/1.2.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5netcdf/1.2.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/h5netcdf/#h5netcdf120-foss-2023a","title":"h5netcdf/1.2.0-foss-2023a","text":"

This is a list of extensions included in the module:

h5netcdf-1.2.0

"},{"location":"available_software/detail/h5py/","title":"h5py","text":"

HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.

https://www.h5py.org/

"},{"location":"available_software/detail/h5py/#available-modules","title":"Available modules","text":"

The overview below shows which h5py installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using h5py, load one of these modules using a module load command like:

module load h5py/3.11.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5py/3.11.0-foss-2023b x x x x x x x x x h5py/3.9.0-foss-2023a x x x x x x x x x h5py/3.8.0-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/hatch-jupyter-builder/","title":"hatch-jupyter-builder","text":"

Hatch Jupyter Builder is a plugin for the hatchling Python build backend. It is primarily targeted for package authors who are providing JavaScript as part of their Python packages. Typical use cases are Jupyter Lab Extensions and Jupyter Widgets.

https://hatch-jupyter-builder.readthedocs.io

"},{"location":"available_software/detail/hatch-jupyter-builder/#available-modules","title":"Available modules","text":"

The overview below shows which hatch-jupyter-builder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hatch-jupyter-builder, load one of these modules using a module load command like:

module load hatch-jupyter-builder/0.9.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatch-jupyter-builder/0.9.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hatch-jupyter-builder/#hatch-jupyter-builder091-gcccore-1230","title":"hatch-jupyter-builder/0.9.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

hatch_jupyter_builder-0.9.1, hatch_nodejs_version-0.3.2

"},{"location":"available_software/detail/hatchling/","title":"hatchling","text":"

Extensible, standards compliant build backend used by Hatch, a modern, extensible Python project manager.

https://hatch.pypa.io

"},{"location":"available_software/detail/hatchling/#available-modules","title":"Available modules","text":"

The overview below shows which hatchling installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hatchling, load one of these modules using a module load command like:

module load hatchling/1.18.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatchling/1.18.0-GCCcore-13.2.0 x x x x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1320","title":"hatchling/1.18.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

editables-0.5, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, pathspec-0.11.2, pluggy-1.3.0, trove_classifiers-2023.10.18

"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1230","title":"hatchling/1.18.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

editables-0.3, hatch-requirements-txt-0.4.1, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, pathspec-0.11.1, pluggy-1.2.0, trove_classifiers-2023.5.24

"},{"location":"available_software/detail/hic-straw/","title":"hic-straw","text":"

Straw is a library which allows rapid streaming of contact data from .hic files.

https://github.com/aidenlab/straw

"},{"location":"available_software/detail/hic-straw/#available-modules","title":"Available modules","text":"

The overview below shows which hic-straw installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hic-straw, load one of these modules using a module load command like:

module load hic-straw/1.3.1-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hic-straw/1.3.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/hiredis/","title":"hiredis","text":"

Hiredis is a minimalistic C client library for the Redis database. It is minimalistic because it just adds minimal support for the protocol, but at the same time it uses a high level printf-alike API in order to make it much higher level than otherwise suggested by its minimal code base and the lack of explicit bindings for every Redis command.

https://github.com/redis/hiredis

"},{"location":"available_software/detail/hiredis/#available-modules","title":"Available modules","text":"

The overview below shows which hiredis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hiredis, load one of these modules using a module load command like:

module load hiredis/1.2.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hiredis/1.2.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hwloc/","title":"hwloc","text":"

The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.

https://www.open-mpi.org/projects/hwloc/

"},{"location":"available_software/detail/hwloc/#available-modules","title":"Available modules","text":"

The overview below shows which hwloc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hwloc, load one of these modules using a module load command like:

module load hwloc/2.9.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hwloc/2.9.2-GCCcore-13.2.0 x x x x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/hypothesis/","title":"hypothesis","text":"

Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.

https://github.com/HypothesisWorks/hypothesis

"},{"location":"available_software/detail/hypothesis/#available-modules","title":"Available modules","text":"

The overview below shows which hypothesis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hypothesis, load one of these modules using a module load command like:

module load hypothesis/6.90.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ipympl/","title":"ipympl","text":"

Leveraging the Jupyter interactive widgets framework, ipympl enables the interactive features of matplotlib in the Jupyter notebook and in JupyterLab. In addition, the figure canvas element is a proper Jupyter interactive widget which can be positioned in interactive widget layouts.

https://matplotlib.org/ipympl

"},{"location":"available_software/detail/ipympl/#available-modules","title":"Available modules","text":"

The overview below shows which ipympl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ipympl, load one of these modules using a module load command like:

module load ipympl/0.9.3-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ipympl/0.9.3-gfbf-2023a x x x x x x x x x ipympl/0.9.3-foss-2023a x x x x x x - x x"},{"location":"available_software/detail/ipympl/#ipympl093-gfbf-2023a","title":"ipympl/0.9.3-gfbf-2023a","text":"

This is a list of extensions included in the module:

ipympl-0.9.3

"},{"location":"available_software/detail/ipympl/#ipympl093-foss-2023a","title":"ipympl/0.9.3-foss-2023a","text":"

This is a list of extensions included in the module:

ipympl-0.9.3

"},{"location":"available_software/detail/jbigkit/","title":"jbigkit","text":"

JBIG-KIT is a software implementation of the JBIG1 data compression standard (ITU-T T.82), which was designed for bi-level image data, such as scanned documents.

https://www.cl.cam.ac.uk/~mgk25/jbigkit/

"},{"location":"available_software/detail/jbigkit/#available-modules","title":"Available modules","text":"

The overview below shows which jbigkit installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jbigkit, load one of these modules using a module load command like:

module load jbigkit/2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jbigkit/2.1-GCCcore-13.2.0 x x x x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jedi/","title":"jedi","text":"

Jedi - an awesome autocompletion, static analysis and refactoring library for Python.

https://github.com/davidhalter/jedi

"},{"location":"available_software/detail/jedi/#available-modules","title":"Available modules","text":"

The overview below shows which jedi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jedi, load one of these modules using a module load command like:

module load jedi/0.19.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jedi/0.19.1-GCCcore-13.2.0 x x x x x x x x x jedi/0.19.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/jedi/#jedi0191-gcccore-1320","title":"jedi/0.19.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

jedi-0.19.1, parso-0.8.3

"},{"location":"available_software/detail/jedi/#jedi0190-gcccore-1230","title":"jedi/0.19.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

jedi-0.19.0, parso-0.8.3

"},{"location":"available_software/detail/jemalloc/","title":"jemalloc","text":"

jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.

http://jemalloc.net

"},{"location":"available_software/detail/jemalloc/#available-modules","title":"Available modules","text":"

The overview below shows which jemalloc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jemalloc, load one of these modules using a module load command like:

module load jemalloc/5.3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jemalloc/5.3.0-GCCcore-12.3.0 x x x x x x x x x jemalloc/5.3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jq/","title":"jq","text":"

jq is a lightweight and flexible command-line JSON processor.

https://stedolan.github.io/jq/

"},{"location":"available_software/detail/jq/#available-modules","title":"Available modules","text":"

The overview below shows which jq installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jq, load one of these modules using a module load command like:

module load jq/1.6-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jq/1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/json-c/","title":"json-c","text":"

JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C, output them as JSON formatted strings and parse JSON formatted strings back into the C representation of JSON objects.

https://github.com/json-c/json-c

"},{"location":"available_software/detail/json-c/#available-modules","title":"Available modules","text":"

The overview below shows which json-c installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using json-c, load one of these modules using a module load command like:

module load json-c/0.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 json-c/0.17-GCCcore-13.2.0 x x x x x x x x x json-c/0.16-GCCcore-12.3.0 x x x x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jupyter-server/","title":"jupyter-server","text":"

The Jupyter Server provides the backend (i.e. the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.

https://jupyter.org/

"},{"location":"available_software/detail/jupyter-server/#available-modules","title":"Available modules","text":"

The overview below shows which jupyter-server installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jupyter-server, load one of these modules using a module load command like:

module load jupyter-server/2.7.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/jupyter-server/#jupyter-server272-gcccore-1230","title":"jupyter-server/2.7.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

anyio-3.7.1, argon2-cffi-bindings-21.2.0, argon2_cffi-23.1.0, arrow-1.2.3, bleach-6.0.0, comm-0.1.4, debugpy-1.6.7.post1, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.18.0, hatch_jupyter_builder-0.8.3, hatch_nodejs_version-0.3.1, ipykernel-6.25.1, ipython_genutils-0.2.0, ipywidgets-8.1.0, jsonschema-4.18.0, jsonschema_specifications-2023.7.1, jupyter_client-8.3.0, jupyter_core-5.3.1, jupyter_events-0.7.0, jupyter_packaging-0.12.3, jupyter_server-2.7.2, jupyter_server_terminals-0.4.4, jupyterlab_pygments-0.2.2, jupyterlab_widgets-3.0.8, mistune-3.0.1, nbclient-0.8.0, nbconvert-7.7.4, nbformat-5.9.2, nest_asyncio-1.5.7, notebook_shim-0.2.3, overrides-7.4.0, pandocfilters-1.5.0, prometheus_client-0.17.1, python-json-logger-2.0.7, referencing-0.30.2, rfc3339_validator-0.1.4, rfc3986_validator-0.1.1, rpds_py-0.9.2, Send2Trash-1.8.2, sniffio-1.3.0, terminado-0.17.1, tinycss2-1.2.1, websocket-client-1.6.1, widgetsnbextension-4.0.8

"},{"location":"available_software/detail/kim-api/","title":"kim-api","text":"

Open Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild only installs the API; the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME, or kim-api-collections-management install user OpenKIM to install them all.

https://openkim.org/

"},{"location":"available_software/detail/kim-api/#available-modules","title":"Available modules","text":"

The overview below shows which kim-api installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using kim-api, load one of these modules using a module load command like:

module load kim-api/2.3.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 kim-api/2.3.0-GCC-13.2.0 x x x x x x x x x kim-api/2.3.0-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libGLU/","title":"libGLU","text":"

The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.

https://mesa.freedesktop.org/archive/glu/

"},{"location":"available_software/detail/libGLU/#available-modules","title":"Available modules","text":"

The overview below shows which libGLU installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libGLU, load one of these modules using a module load command like:

module load libGLU/9.0.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libGLU/9.0.3-GCCcore-13.2.0 x x x x x x x x x libGLU/9.0.3-GCCcore-12.3.0 x x x x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libaec/","title":"libaec","text":"

Libaec provides fast lossless compression of 1 up to 32 bit wide signed or unsigned integers (samples). The library achieves best results for low entropy data as often encountered in space imaging instrument data or numerical model output from weather or climate simulations. While floating point representations are not directly supported, they can also be efficiently coded by grouping exponents and mantissa.

https://gitlab.dkrz.de/k202009/libaec

"},{"location":"available_software/detail/libaec/#available-modules","title":"Available modules","text":"

The overview below shows which libaec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libaec, load one of these modules using a module load command like:

module load libaec/1.0.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaec/1.0.6-GCCcore-13.2.0 x x x x x x x x x libaec/1.0.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libaio/","title":"libaio","text":"

Asynchronous input/output library that uses the kernel's native interface.

https://pagure.io/libaio

"},{"location":"available_software/detail/libaio/#available-modules","title":"Available modules","text":"

The overview below shows which libaio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libaio, load one of these modules using a module load command like:

module load libaio/0.3.113-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaio/0.3.113-GCCcore-12.3.0 x x x x x x x x x libaio/0.3.113-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libarchive/","title":"libarchive","text":"

Multi-format archive and compression library

https://www.libarchive.org/

"},{"location":"available_software/detail/libarchive/#available-modules","title":"Available modules","text":"

The overview below shows which libarchive installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libarchive, load one of these modules using a module load command like:

module load libarchive/3.7.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libarchive/3.7.2-GCCcore-13.2.0 x x x x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libcerf/","title":"libcerf","text":"

libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.

https://jugit.fz-juelich.de/mlz/libcerf

"},{"location":"available_software/detail/libcerf/#available-modules","title":"Available modules","text":"

The overview below shows which libcerf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libcerf, load one of these modules using a module load command like:

module load libcerf/2.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcerf/2.3-GCCcore-12.3.0 x x x x x x x x x libcerf/2.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libcint/","title":"libcint","text":"

libcint is an open source library for analytical Gaussian integrals.

https://github.com/sunqm/libcint

"},{"location":"available_software/detail/libcint/#available-modules","title":"Available modules","text":"

The overview below shows which libcint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libcint, load one of these modules using a module load command like:

module load libcint/5.4.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcint/5.4.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/libdeflate/","title":"libdeflate","text":"

Heavily optimized library for DEFLATE/zlib/gzip compression and decompression.

https://github.com/ebiggers/libdeflate
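libdeflate itself has no Python bindings in the standard library, but Python's built-in `zlib` module produces the same DEFLATE/zlib streams that libdeflate accelerates. As a small illustration of the round-trip the library optimises (a sketch, not libdeflate's own API):

```python
import zlib

# Compress and decompress a DEFLATE/zlib stream with the stdlib zlib
# module; libdeflate provides a heavily optimised implementation of
# exactly this kind of round-trip.
data = b"EESSI " * 1000
compressed = zlib.compress(data, level=9)
assert zlib.decompress(compressed) == data
print(len(data), "->", len(compressed))
```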

"},{"location":"available_software/detail/libdeflate/#available-modules","title":"Available modules","text":"

The overview below shows which libdeflate installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdeflate, load one of these modules using a module load command like:

module load libdeflate/1.19-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdeflate/1.19-GCCcore-13.2.0 x x x x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libdrm/","title":"libdrm","text":"

Direct Rendering Manager runtime library.

https://dri.freedesktop.org

"},{"location":"available_software/detail/libdrm/#available-modules","title":"Available modules","text":"

The overview below shows which libdrm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdrm, load one of these modules using a module load command like:

module load libdrm/2.4.117-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdrm/2.4.117-GCCcore-13.2.0 x x x x x x x x x libdrm/2.4.115-GCCcore-12.3.0 x x x x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libdwarf/","title":"libdwarf","text":"

The DWARF Debugging Information Format is of interest to programmers working on compilers and debuggers (and anyone interested in reading or writing DWARF information).

https://www.prevanders.net/dwarf.html

"},{"location":"available_software/detail/libdwarf/#available-modules","title":"Available modules","text":"

The overview below shows which libdwarf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdwarf, load one of these modules using a module load command like:

module load libdwarf/0.9.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdwarf/0.9.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libepoxy/","title":"libepoxy","text":"

Epoxy is a library for handling OpenGL function pointer management for you

https://github.com/anholt/libepoxy

"},{"location":"available_software/detail/libepoxy/#available-modules","title":"Available modules","text":"

The overview below shows which libepoxy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libepoxy, load one of these modules using a module load command like:

module load libepoxy/1.5.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libepoxy/1.5.10-GCCcore-13.2.0 x x x x x x x x x libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libevent/","title":"libevent","text":"

The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.

https://libevent.org/
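The callback-on-file-descriptor mechanism described above can be sketched with Python's stdlib `selectors` module. This is not libevent itself, only a minimal illustration of the same idea: registering a callback that fires when a descriptor becomes readable.

```python
import selectors
import socket

# A stdlib sketch of the event-callback pattern libevent implements:
# run a callback when a file descriptor becomes readable.
sel = selectors.DefaultSelector()
r, w = socket.socketpair()
received = []

def on_readable(sock):
    # Callback fired for EVENT_READ on the registered descriptor.
    received.append(sock.recv(16))

sel.register(r, selectors.EVENT_READ, on_readable)
w.send(b"ping")
for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)  # key.data holds the registered callback
sel.unregister(r)
r.close()
w.close()
print(received)  # [b'ping']
```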

"},{"location":"available_software/detail/libevent/#available-modules","title":"Available modules","text":"

The overview below shows which libevent installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libevent, load one of these modules using a module load command like:

module load libevent/2.1.12-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libevent/2.1.12-GCCcore-13.2.0 x x x x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libfabric/","title":"libfabric","text":"

Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.

https://ofiwg.github.io/libfabric/

"},{"location":"available_software/detail/libfabric/#available-modules","title":"Available modules","text":"

The overview below shows which libfabric installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libfabric, load one of these modules using a module load command like:

module load libfabric/1.19.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libfabric/1.19.0-GCCcore-13.2.0 x x x x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libffi/","title":"libffi","text":"

The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.

https://sourceware.org/libffi/
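Python's own `ctypes` module is built on top of libffi, so a `ctypes` call is a convenient demonstration of what the library does: constructing the platform calling convention for a C function at run time. A small sketch (assuming a POSIX system where `CDLL(None)` exposes the C library):

```python
import ctypes

# ctypes uses libffi under the hood to build the call interface for a
# C function at run time; here we describe and call strlen() from libc.
libc = ctypes.CDLL(None)  # handle to the already-linked C library
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"EESSI"))  # -> 5
```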

"},{"location":"available_software/detail/libffi/#available-modules","title":"Available modules","text":"

The overview below shows which libffi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libffi, load one of these modules using a module load command like:

module load libffi/3.4.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libffi/3.4.4-GCCcore-13.2.0 x x x x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgcrypt/","title":"libgcrypt","text":"

Libgcrypt is a general purpose cryptographic library originally based on code from GnuPG

https://gnupg.org/related_software/libgcrypt/index.html

"},{"location":"available_software/detail/libgcrypt/#available-modules","title":"Available modules","text":"

The overview below shows which libgcrypt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgcrypt, load one of these modules using a module load command like:

module load libgcrypt/1.10.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgcrypt/1.10.3-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libgd/","title":"libgd","text":"

GD is an open source code library for the dynamic creation of images by programmers.

https://libgd.github.io

"},{"location":"available_software/detail/libgd/#available-modules","title":"Available modules","text":"

The overview below shows which libgd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgd, load one of these modules using a module load command like:

module load libgd/2.3.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgd/2.3.3-GCCcore-12.3.0 x x x x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgeotiff/","title":"libgeotiff","text":"

Library for reading and writing coordinate system information from/to GeoTIFF files

https://directory.fsf.org/wiki/Libgeotiff

"},{"location":"available_software/detail/libgeotiff/#available-modules","title":"Available modules","text":"

The overview below shows which libgeotiff installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgeotiff, load one of these modules using a module load command like:

module load libgeotiff/1.7.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgeotiff/1.7.3-GCCcore-13.2.0 x x x x x x x x x libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgit2/","title":"libgit2","text":"

libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings.

https://libgit2.org/

"},{"location":"available_software/detail/libgit2/#available-modules","title":"Available modules","text":"

The overview below shows which libgit2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgit2, load one of these modules using a module load command like:

module load libgit2/1.7.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgit2/1.7.2-GCCcore-13.2.0 x x x x x x x x x libgit2/1.7.1-GCCcore-12.3.0 x x x x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libglvnd/","title":"libglvnd","text":"

libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.

https://gitlab.freedesktop.org/glvnd/libglvnd

"},{"location":"available_software/detail/libglvnd/#available-modules","title":"Available modules","text":"

The overview below shows which libglvnd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libglvnd, load one of these modules using a module load command like:

module load libglvnd/1.7.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libglvnd/1.7.0-GCCcore-13.2.0 x x x x x x x x x libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgpg-error/","title":"libgpg-error","text":"

Libgpg-error is a small library that defines common error values for all GnuPG components.

https://gnupg.org/related_software/libgpg-error/index.html

"},{"location":"available_software/detail/libgpg-error/#available-modules","title":"Available modules","text":"

The overview below shows which libgpg-error installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgpg-error, load one of these modules using a module load command like:

module load libgpg-error/1.48-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgpg-error/1.48-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libiconv/","title":"libiconv","text":"

Libiconv converts from one character encoding to another through Unicode conversion

https://www.gnu.org/software/libiconv
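The convert-via-Unicode approach described above can be illustrated with Python's stdlib codec machinery; this is not libiconv itself, just the same conversion pattern (bytes in one encoding, decoded to Unicode, re-encoded in another):

```python
# Re-encode Latin-1 bytes as UTF-8 by going through Unicode, the same
# pivot strategy libiconv uses for its conversions.
latin1_bytes = "café".encode("latin-1")
utf8_bytes = latin1_bytes.decode("latin-1").encode("utf-8")
print(utf8_bytes)  # b'caf\xc3\xa9'
```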

"},{"location":"available_software/detail/libiconv/#available-modules","title":"Available modules","text":"

The overview below shows which libiconv installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libiconv, load one of these modules using a module load command like:

module load libiconv/1.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libiconv/1.17-GCCcore-13.2.0 x x x x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libidn2/","title":"libidn2","text":"

Libidn2 implements the revised algorithm for internationalized domain names called IDNA2008/TR46.

http://www.gnu.org/software/libidn2

"},{"location":"available_software/detail/libidn2/#available-modules","title":"Available modules","text":"

The overview below shows which libidn2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libidn2, load one of these modules using a module load command like:

module load libidn2/2.3.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libidn2/2.3.7-GCCcore-12.3.0 x x x x x x x x x libidn2/2.3.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libjpeg-turbo/","title":"libjpeg-turbo","text":"

libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.

https://sourceforge.net/projects/libjpeg-turbo/

"},{"location":"available_software/detail/libjpeg-turbo/#available-modules","title":"Available modules","text":"

The overview below shows which libjpeg-turbo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libjpeg-turbo, load one of these modules using a module load command like:

module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libogg/","title":"libogg","text":"

Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.

https://xiph.org/ogg/

"},{"location":"available_software/detail/libogg/#available-modules","title":"Available modules","text":"

The overview below shows which libogg installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libogg, load one of these modules using a module load command like:

module load libogg/1.3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libogg/1.3.5-GCCcore-13.2.0 x x x x x x x x x libogg/1.3.5-GCCcore-12.3.0 x x x x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libopus/","title":"libopus","text":"

Opus is a totally open, royalty-free, highly versatile audio codec. Opus is unmatched for interactive speech and music transmission over the Internet, but is also intended for storage and streaming applications. It is standardized by the Internet Engineering Task Force (IETF) as RFC 6716 which incorporated technology from Skype\u2019s SILK codec and Xiph.Org\u2019s CELT codec.

https://www.opus-codec.org/

"},{"location":"available_software/detail/libopus/#available-modules","title":"Available modules","text":"

The overview below shows which libopus installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libopus, load one of these modules using a module load command like:

module load libopus/1.5.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libopus/1.5.2-GCCcore-13.2.0 x x x x x x x x x libopus/1.4-GCCcore-12.3.0 x x x x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libpciaccess/","title":"libpciaccess","text":"

Generic PCI access library.

https://cgit.freedesktop.org/xorg/lib/libpciaccess/

"},{"location":"available_software/detail/libpciaccess/#available-modules","title":"Available modules","text":"

The overview below shows which libpciaccess installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libpciaccess, load one of these modules using a module load command like:

module load libpciaccess/0.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpciaccess/0.17-GCCcore-13.2.0 x x x x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libpng/","title":"libpng","text":"

libpng is the official PNG reference library

http://www.libpng.org/pub/png/libpng.html

"},{"location":"available_software/detail/libpng/#available-modules","title":"Available modules","text":"

The overview below shows which libpng installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libpng, load one of these modules using a module load command like:

module load libpng/1.6.40-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpng/1.6.40-GCCcore-13.2.0 x x x x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/librosa/","title":"librosa","text":"

Audio and music processing in Python

https://librosa.org/

"},{"location":"available_software/detail/librosa/#available-modules","title":"Available modules","text":"

The overview below shows which librosa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using librosa, load one of these modules using a module load command like:

module load librosa/0.10.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 librosa/0.10.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/librosa/#librosa0101-foss-2023a","title":"librosa/0.10.1-foss-2023a","text":"

This is a list of extensions included in the module:

audioread-3.0.1, lazy_loader-0.3, librosa-0.10.1, resampy-0.4.3, soundfile-0.12.1, soxr-0.3.7
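Once the librosa module is loaded, the bundled extensions listed above should be importable from Python. A quick stdlib-only way to check which of them are visible in the current session (names taken from the extension list; availability depends on your environment):

```python
import importlib.util

# Extension names from the librosa/0.10.1-foss-2023a module; find_spec
# reports whether each package is importable without importing it.
extensions = ["audioread", "lazy_loader", "librosa", "resampy",
              "soundfile", "soxr"]
available = {name: importlib.util.find_spec(name) is not None
             for name in extensions}
for name, found in available.items():
    print(f"{name}: {'found' if found else 'not found'}")
```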

"},{"location":"available_software/detail/libsndfile/","title":"libsndfile","text":"

Libsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface.

http://www.mega-nerd.com/libsndfile

"},{"location":"available_software/detail/libsndfile/#available-modules","title":"Available modules","text":"

The overview below shows which libsndfile installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libsndfile, load one of these modules using a module load command like:

module load libsndfile/1.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsndfile/1.2.2-GCCcore-13.2.0 x x x x x x x x x libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libsodium/","title":"libsodium","text":"

Sodium is a modern, easy-to-use software library for encryption, decryption, signatures, password hashing and more.

https://doc.libsodium.org/

"},{"location":"available_software/detail/libsodium/#available-modules","title":"Available modules","text":"

The overview below shows which libsodium installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libsodium, load one of these modules using a module load command like:

module load libsodium/1.0.19-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsodium/1.0.19-GCCcore-13.2.0 x x x x x x x x x libsodium/1.0.18-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libspatialindex/","title":"libspatialindex","text":"

C++ implementation of R*-tree, an MVR-tree and a TPR-tree with C API

https://libspatialindex.org

"},{"location":"available_software/detail/libspatialindex/#available-modules","title":"Available modules","text":"

The overview below shows which libspatialindex installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libspatialindex, load one of these modules using a module load command like:

module load libspatialindex/1.9.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libspatialindex/1.9.3-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libtirpc/","title":"libtirpc","text":"

Libtirpc is a port of Sun's Transport-Independent RPC library to Linux.

https://sourceforge.net/projects/libtirpc/

"},{"location":"available_software/detail/libtirpc/#available-modules","title":"Available modules","text":"

The overview below shows which libtirpc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libtirpc, load one of these modules using a module load command like:

module load libtirpc/1.3.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libtirpc/1.3.4-GCCcore-13.2.0 x x x x x x x x x libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libunwind/","title":"libunwind","text":"

The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications.

https://www.nongnu.org/libunwind/

"},{"location":"available_software/detail/libunwind/#available-modules","title":"Available modules","text":"

The overview below shows which libunwind installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libunwind, load one of these modules using a module load command like:

module load libunwind/1.6.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libunwind/1.6.2-GCCcore-13.2.0 x x x x x x x x x libunwind/1.6.2-GCCcore-12.3.0 x x x x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libvorbis/","title":"libvorbis","text":"

Ogg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format

https://xiph.org/vorbis/

"},{"location":"available_software/detail/libvorbis/#available-modules","title":"Available modules","text":"

The overview below shows which libvorbis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libvorbis, load one of these modules using a module load command like:

module load libvorbis/1.3.7-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libvorbis/1.3.7-GCCcore-13.2.0 x x x x x x x x x libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libvori/","title":"libvori","text":"

C++ library implementing the Voronoi integration as well as the compressed bqb file format. The present version of libvori is a very early development version, which is hard-coded to work with the CP2K program package.

https://brehm-research.de/libvori.php

"},{"location":"available_software/detail/libvori/#available-modules","title":"Available modules","text":"

The overview below shows which libvori installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libvori, load one of these modules using a module load command like:

module load libvori/220621-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libvori/220621-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libwebp/","title":"libwebp","text":"

WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.

https://developers.google.com/speed/webp/

"},{"location":"available_software/detail/libwebp/#available-modules","title":"Available modules","text":"

The overview below shows which libwebp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libwebp, load one of these modules using a module load command like:

module load libwebp/1.3.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libwebp/1.3.2-GCCcore-13.2.0 x x x x x x x x x libwebp/1.3.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libxc/","title":"libxc","text":"

Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.

https://www.tddft.org/programs/libxc

"},{"location":"available_software/detail/libxc/#available-modules","title":"Available modules","text":"

The overview below shows which libxc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxc, load one of these modules using a module load command like:

module load libxc/6.2.2-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxc/6.2.2-GCC-12.3.0 x x x x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxml2-python/","title":"libxml2-python","text":"

Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform). This is the Python binding.

http://xmlsoft.org/

"},{"location":"available_software/detail/libxml2-python/#available-modules","title":"Available modules","text":"

The overview below shows which libxml2-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxml2-python, load one of these modules using a module load command like:

module load libxml2-python/2.11.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2-python/2.11.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libxml2/","title":"libxml2","text":"

Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).

http://xmlsoft.org/

"},{"location":"available_software/detail/libxml2/#available-modules","title":"Available modules","text":"

The overview below shows which libxml2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxml2, load one of these modules using a module load command like:

module load libxml2/2.11.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2/2.11.5-GCCcore-13.2.0 x x x x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxslt/","title":"libxslt","text":"

Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).

http://xmlsoft.org/

"},{"location":"available_software/detail/libxslt/#available-modules","title":"Available modules","text":"

The overview below shows which libxslt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxslt, load one of these modules using a module load command like:

module load libxslt/1.1.38-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxslt/1.1.38-GCCcore-13.2.0 x x x x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxsmm/","title":"libxsmm","text":"

LIBXSMM is a library for small dense and small sparse matrix-matrix multiplications targeting Intel Architecture (x86).

https://github.com/hfp/libxsmm

"},{"location":"available_software/detail/libxsmm/#available-modules","title":"Available modules","text":"

The overview below shows which libxsmm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxsmm, load one of these modules using a module load command like:

module load libxsmm/1.17-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxsmm/1.17-GCC-12.3.0 - - - x x x x x x"},{"location":"available_software/detail/libyaml/","title":"libyaml","text":"

LibYAML is a YAML parser and emitter written in C.

https://pyyaml.org/wiki/LibYAML

"},{"location":"available_software/detail/libyaml/#available-modules","title":"Available modules","text":"

The overview below shows which libyaml installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libyaml, load one of these modules using a module load command like:

module load libyaml/0.2.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libyaml/0.2.5-GCCcore-13.2.0 x x x x x x x x x libyaml/0.2.5-GCCcore-12.3.0 x x x x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/lpsolve/","title":"lpsolve","text":"

Mixed Integer Linear Programming (MILP) solver

https://sourceforge.net/projects/lpsolve/

"},{"location":"available_software/detail/lpsolve/#available-modules","title":"Available modules","text":"

The overview below shows which lpsolve installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lpsolve, load one of these modules using a module load command like:

module load lpsolve/5.5.2.11-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lpsolve/5.5.2.11-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/lxml/","title":"lxml","text":"

The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.

https://lxml.de/

"},{"location":"available_software/detail/lxml/#available-modules","title":"Available modules","text":"

The overview below shows which lxml installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lxml, load one of these modules using a module load command like:

module load lxml/4.9.3-GCCcore-13.2.0\n
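lxml's `lxml.etree` API is largely compatible with the standard library's ElementTree interface, so typical usage can be sketched with the stdlib module as a stand-in (this is an illustration, not lxml itself; with the module above loaded you would swap in `from lxml import etree`):

```python
# Parse a small XML document and query it; lxml.etree offers the same
# ElementTree-style API, backed by libxml2/libxslt for speed, XPath and XSLT.
import xml.etree.ElementTree as ET  # stand-in for: from lxml import etree

doc = ET.fromstring("<catalog><pkg name='lxml'/><pkg name='libxml2'/></catalog>")
names = [pkg.get("name") for pkg in doc.findall("pkg")]
print(names)  # -> ['lxml', 'libxml2']
```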

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lxml/4.9.3-GCCcore-13.2.0 x x x x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/lz4/","title":"lz4","text":"

LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.

https://lz4.github.io/lz4/

"},{"location":"available_software/detail/lz4/#available-modules","title":"Available modules","text":"

The overview below shows which lz4 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lz4, load one of these modules using a module load command like:

module load lz4/1.9.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lz4/1.9.4-GCCcore-13.2.0 x x x x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/make/","title":"make","text":"

GNU version of the make utility

https://www.gnu.org/software/make/make.html

"},{"location":"available_software/detail/make/#available-modules","title":"Available modules","text":"

The overview below shows which make installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using make, load one of these modules using a module load command like:

module load make/4.4.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 make/4.4.1-GCCcore-13.2.0 x x x x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x x x x make/4.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/mallard-ducktype/","title":"mallard-ducktype","text":"

Parser for the lightweight Ducktype syntax for Mallard

https://github.com/projectmallard/mallard-ducktype

"},{"location":"available_software/detail/mallard-ducktype/#available-modules","title":"Available modules","text":"

The overview below shows which mallard-ducktype installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mallard-ducktype, load one of these modules using a module load command like:

module load mallard-ducktype/1.0.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mallard-ducktype/1.0.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/matplotlib/","title":"matplotlib","text":"

matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in Python scripts, the Python and IPython shell, web application servers, and six graphical user interface toolkits.

https://matplotlib.org

"},{"location":"available_software/detail/matplotlib/#available-modules","title":"Available modules","text":"

The overview below shows which matplotlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using matplotlib, load one of these modules using a module load command like:

module load matplotlib/3.8.2-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 matplotlib/3.8.2-gfbf-2023b x x x x x x x x x matplotlib/3.7.2-gfbf-2023a x x x x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/matplotlib/#matplotlib382-gfbf-2023b","title":"matplotlib/3.8.2-gfbf-2023b","text":"

This is a list of extensions included in the module:

contourpy-1.2.0, Cycler-0.12.1, fonttools-4.47.0, kiwisolver-1.4.5, matplotlib-3.8.2

"},{"location":"available_software/detail/matplotlib/#matplotlib372-gfbf-2023a","title":"matplotlib/3.7.2-gfbf-2023a","text":"

This is a list of extensions included in the module:

contourpy-1.1.0, Cycler-0.11.0, fonttools-4.42.0, kiwisolver-1.4.4, matplotlib-3.7.2

"},{"location":"available_software/detail/matplotlib/#matplotlib370-gfbf-2022b","title":"matplotlib/3.7.0-gfbf-2022b","text":"

This is a list of extensions included in the module:

contourpy-1.0.7, Cycler-0.11.0, fonttools-4.38.0, kiwisolver-1.4.4, matplotlib-3.7.0

"},{"location":"available_software/detail/maturin/","title":"maturin","text":"

This project is meant as a zero configuration replacement for setuptools-rust and milksnake. It supports building wheels for Python 3.5+ on Windows, Linux, macOS and FreeBSD, can upload them to PyPI and has basic PyPy and GraalPy support.

https://github.com/pyo3/maturin

"},{"location":"available_software/detail/maturin/#available-modules","title":"Available modules","text":"

The overview below shows which maturin installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using maturin, load one of these modules using a module load command like:

module load maturin/1.5.0-GCCcore-13.2.0-Rust-1.76.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 maturin/1.5.0-GCCcore-13.2.0-Rust-1.76.0 x x x x x x x x x maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/meson-python/","title":"meson-python","text":"

Python build backend (PEP 517) for Meson projects

https://github.com/mesonbuild/meson-python

"},{"location":"available_software/detail/meson-python/#available-modules","title":"Available modules","text":"

The overview below shows which meson-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using meson-python, load one of these modules using a module load command like:

module load meson-python/0.15.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 meson-python/0.15.0-GCCcore-13.2.0 x x x x x x x x x meson-python/0.15.0-GCCcore-12.3.0 x x x x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x x x x meson-python/0.11.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1320","title":"meson-python/0.15.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

meson-python-0.15.0, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1230","title":"meson-python/0.15.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

meson-python-0.15.0, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0132-gcccore-1230","title":"meson-python/0.13.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

meson-python-0.13.2, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0110-gcccore-1220","title":"meson-python/0.11.0-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

meson-python-0.11.0, pyproject-metadata-0.6.1

"},{"location":"available_software/detail/mpi4py/","title":"mpi4py","text":"

MPI for Python (mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.

https://github.com/mpi4py/mpi4py

"},{"location":"available_software/detail/mpi4py/#available-modules","title":"Available modules","text":"

The overview below shows which mpi4py installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mpi4py, load one of these modules using a module load command like:

module load mpi4py/3.1.5-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpi4py/3.1.5-gompi-2023b x x x x x x x x x mpi4py/3.1.4-gompi-2023a x x x x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/mpi4py/#mpi4py315-gompi-2023b","title":"mpi4py/3.1.5-gompi-2023b","text":"

This is a list of extensions included in the module:

mpi4py-3.1.5

"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2023a","title":"mpi4py/3.1.4-gompi-2023a","text":"

This is a list of extensions included in the module:

mpi4py-3.1.4

"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2022b","title":"mpi4py/3.1.4-gompi-2022b","text":"

This is a list of extensions included in the module:

mpi4py-3.1.4

"},{"location":"available_software/detail/mpl-ascii/","title":"mpl-ascii","text":"

A matplotlib backend that produces plots using only ASCII characters

https://github.com/chriscave/mpl_ascii

"},{"location":"available_software/detail/mpl-ascii/#available-modules","title":"Available modules","text":"

The overview below shows which mpl-ascii installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mpl-ascii, load one of these modules using a module load command like:

module load mpl-ascii/0.10.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpl-ascii/0.10.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/mpl-ascii/#mpl-ascii0100-gfbf-2023a","title":"mpl-ascii/0.10.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

mpl-ascii-0.10.0

"},{"location":"available_software/detail/multiprocess/","title":"multiprocess","text":"

Better multiprocessing and multithreading in Python

https://github.com/uqfoundation/multiprocess

"},{"location":"available_software/detail/multiprocess/#available-modules","title":"Available modules","text":"

The overview below shows which multiprocess installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using multiprocess, load one of these modules using a module load command like:

module load multiprocess/0.70.16-gfbf-2023b\n
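multiprocess is a fork of the standard library's multiprocessing with a drop-in compatible API (it serializes with dill instead of pickle, so more object types can cross process boundaries). A minimal sketch using the stdlib module, which behaves the same way once you substitute `import multiprocess as mp` after loading the module above:

```python
import multiprocessing as mp  # stand-in for: import multiprocess as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # Fan the work out over two worker processes and collect the results.
    with mp.Pool(processes=2) as pool:
        print(pool.map(square, range(5)))  # -> [0, 1, 4, 9, 16]
```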

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 multiprocess/0.70.16-gfbf-2023b x x x x x x x x x"},{"location":"available_software/detail/ncbi-vdb/","title":"ncbi-vdb","text":"

The SRA Toolkit and SDK from NCBI is a collection of tools and libraries for using data in the INSDC Sequence Read Archives.

https://github.com/ncbi/ncbi-vdb

"},{"location":"available_software/detail/ncbi-vdb/#available-modules","title":"Available modules","text":"

The overview below shows which ncbi-vdb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ncbi-vdb, load one of these modules using a module load command like:

module load ncbi-vdb/3.0.10-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ncbi-vdb/3.0.10-gompi-2023a x x x x x x x x x ncbi-vdb/3.0.5-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/ncdu/","title":"ncdu","text":"

Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don't have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.

https://dev.yorhel.nl/ncdu

"},{"location":"available_software/detail/ncdu/#available-modules","title":"Available modules","text":"

The overview below shows which ncdu installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ncdu, load one of these modules using a module load command like:

module load ncdu/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ncdu/1.18-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/netCDF-Fortran/","title":"netCDF-Fortran","text":"

NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

https://www.unidata.ucar.edu/software/netcdf/

"},{"location":"available_software/detail/netCDF-Fortran/#available-modules","title":"Available modules","text":"

The overview below shows which netCDF-Fortran installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netCDF-Fortran, load one of these modules using a module load command like:

module load netCDF-Fortran/4.6.1-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF-Fortran/4.6.1-gompi-2023a x x x x x x x x x netCDF-Fortran/4.6.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/netCDF/","title":"netCDF","text":"

NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

https://www.unidata.ucar.edu/software/netcdf/

"},{"location":"available_software/detail/netCDF/#available-modules","title":"Available modules","text":"

The overview below shows which netCDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netCDF, load one of these modules using a module load command like:

module load netCDF/4.9.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF/4.9.2-gompi-2023b x x x x x x x x x netCDF/4.9.2-gompi-2023a x x x x x x x x x netCDF/4.9.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/netcdf4-python/","title":"netcdf4-python","text":"

Python/numpy interface to netCDF.

https://unidata.github.io/netcdf4-python/

"},{"location":"available_software/detail/netcdf4-python/#available-modules","title":"Available modules","text":"

The overview below shows which netcdf4-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netcdf4-python, load one of these modules using a module load command like:

module load netcdf4-python/1.6.4-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netcdf4-python/1.6.4-foss-2023a x x x x x x x x x netcdf4-python/1.6.3-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/netcdf4-python/#netcdf4-python164-foss-2023a","title":"netcdf4-python/1.6.4-foss-2023a","text":"

This is a list of extensions included in the module:

cftime-1.6.2, netcdf4-python-1.6.4

"},{"location":"available_software/detail/netcdf4-python/#netcdf4-python163-foss-2022b","title":"netcdf4-python/1.6.3-foss-2022b","text":"

This is a list of extensions included in the module:

cftime-1.6.2, netcdf4-python-1.6.3

"},{"location":"available_software/detail/nettle/","title":"nettle","text":"

Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space.

https://www.lysator.liu.se/~nisse/nettle/

"},{"location":"available_software/detail/nettle/#available-modules","title":"Available modules","text":"

The overview below shows which nettle installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nettle, load one of these modules using a module load command like:

module load nettle/3.9.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nettle/3.9.1-GCCcore-13.2.0 x x x x x x x x x nettle/3.9.1-GCCcore-12.3.0 x x x x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/networkx/","title":"networkx","text":"

NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.

https://pypi.python.org/pypi/networkx

"},{"location":"available_software/detail/networkx/#available-modules","title":"Available modules","text":"

The overview below shows which networkx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using networkx, load one of these modules using a module load command like:

module load networkx/3.2.1-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 networkx/3.2.1-gfbf-2023b x x x x x x x x x networkx/3.1-gfbf-2023a x x x x x x x x x networkx/3.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/nlohmann_json/","title":"nlohmann_json","text":"

JSON for Modern C++

https://github.com/nlohmann/json

"},{"location":"available_software/detail/nlohmann_json/#available-modules","title":"Available modules","text":"

The overview below shows which nlohmann_json installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nlohmann_json, load one of these modules using a module load command like:

module load nlohmann_json/3.11.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nlohmann_json/3.11.3-GCCcore-13.2.0 x x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/nodejs/","title":"nodejs","text":"

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

https://nodejs.org

"},{"location":"available_software/detail/nodejs/#available-modules","title":"Available modules","text":"

The overview below shows which nodejs installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nodejs, load one of these modules using a module load command like:

module load nodejs/20.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nodejs/20.9.0-GCCcore-13.2.0 x x x x x x x x x nodejs/18.17.1-GCCcore-12.3.0 x x x x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/nsync/","title":"nsync","text":"

nsync is a C library that exports various synchronization primitives, such as mutexes

https://github.com/google/nsync

"},{"location":"available_software/detail/nsync/#available-modules","title":"Available modules","text":"

The overview below shows which nsync installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nsync, load one of these modules using a module load command like:

module load nsync/1.26.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nsync/1.26.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/numactl/","title":"numactl","text":"

The numactl program allows you to run your application program on specific CPUs and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.

https://github.com/numactl/numactl

"},{"location":"available_software/detail/numactl/#available-modules","title":"Available modules","text":"

The overview below shows which numactl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using numactl, load one of these modules using a module load command like:

module load numactl/2.0.16-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 numactl/2.0.16-GCCcore-13.2.0 x x x x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/numba/","title":"numba","text":"

Numba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code.

https://numba.pydata.org/

"},{"location":"available_software/detail/numba/#available-modules","title":"Available modules","text":"

The overview below shows which numba installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using numba, load one of these modules using a module load command like:

module load numba/0.58.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 numba/0.58.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/numba/#numba0581-foss-2023a","title":"numba/0.58.1-foss-2023a","text":"

This is a list of extensions included in the module:

llvmlite-0.41.1, numba-0.58.1

"},{"location":"available_software/detail/occt/","title":"occt","text":"

Open CASCADE Technology (OCCT) is an object-oriented C++ class library designed for rapid production of sophisticated domain-specific CAD/CAM/CAE applications.

https://www.opencascade.com/

"},{"location":"available_software/detail/occt/#available-modules","title":"Available modules","text":"

The overview below shows which occt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using occt, load one of these modules using a module load command like:

module load occt/7.8.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 occt/7.8.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/orjson/","title":"orjson","text":"

Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy.

https://github.com/ijl/orjson

"},{"location":"available_software/detail/orjson/#available-modules","title":"Available modules","text":"

The overview below shows which orjson installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using orjson, load one of these modules using a module load command like:

module load orjson/3.9.15-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 orjson/3.9.15-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/orjson/#orjson3915-gcccore-1230","title":"orjson/3.9.15-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

mypy-1.10.0, mypy_extensions-1.0.0, orjson-3.9.15, ruff-0.4.8

"},{"location":"available_software/detail/parallel/","title":"parallel","text":"

parallel: Build and execute shell commands in parallel

https://savannah.gnu.org/projects/parallel/

"},{"location":"available_software/detail/parallel/#available-modules","title":"Available modules","text":"

The overview below shows which parallel installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using parallel, load one of these modules using a module load command like:

module load parallel/20230722-GCCcore-12.2.0\n
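With the module loaded, GNU parallel can fan shell commands out over the available CPU cores. A small sketch (the `*.txt` files are placeholders):

```shell
# Compress every .txt file in the current directory,
# one gzip process per CPU core by default
parallel gzip ::: *.txt

# Limit to 4 concurrent jobs; {} is replaced by each argument
parallel -j 4 'md5sum {}' ::: *.txt
```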

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 parallel/20230722-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/patchelf/","title":"patchelf","text":"

PatchELF is a small utility to modify the dynamic linker and RPATH of ELF executables.

https://github.com/NixOS/patchelf

"},{"location":"available_software/detail/patchelf/#available-modules","title":"Available modules","text":"

The overview below shows which patchelf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using patchelf, load one of these modules using a module load command like:

module load patchelf/0.18.0-GCCcore-13.2.0\n
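After loading the module, patchelf can inspect and rewrite the dynamic linker settings of an ELF binary. A brief sketch (`./my_binary` is a placeholder):

```shell
# Print the current RPATH/RUNPATH of a binary
patchelf --print-rpath ./my_binary

# Rewrite it so libraries are found relative to the binary itself
patchelf --set-rpath '$ORIGIN/../lib' ./my_binary

# Show which dynamic linker (ELF interpreter) the binary uses
patchelf --print-interpreter ./my_binary
```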

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 patchelf/0.18.0-GCCcore-13.2.0 x x x x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pixman/","title":"pixman","text":"

Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.

http://www.pixman.org/

"},{"location":"available_software/detail/pixman/#available-modules","title":"Available modules","text":"

The overview below shows which pixman installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pixman, load one of these modules using a module load command like:

module load pixman/0.42.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pixman/0.42.2-GCCcore-13.2.0 x x x x x x x x x pixman/0.42.2-GCCcore-12.3.0 x x x x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pkgconf/","title":"pkgconf","text":"

pkgconf is a program which helps to configure compiler and linker flags for development libraries. It is similar to pkg-config from freedesktop.org.

https://github.com/pkgconf/pkgconf

"},{"location":"available_software/detail/pkgconf/#available-modules","title":"Available modules","text":"

The overview below shows which pkgconf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pkgconf, load one of these modules using a module load command like:

module load pkgconf/2.0.3-GCCcore-13.2.0\n
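With the module loaded, pkgconf behaves as a drop-in pkg-config replacement: it prints the compiler and linker flags recorded in a library's `.pc` file. A sketch, assuming a library such as zlib ships a pkg-config file on your system (`myprog.c` is a placeholder):

```shell
# Query compile and link flags for a library
pkgconf --cflags --libs zlib

# Use them directly in a compile command
gcc myprog.c $(pkgconf --cflags --libs zlib) -o myprog
```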

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x - x x pkgconf/1.8.0 x x x x x x x x x"},{"location":"available_software/detail/pkgconfig/","title":"pkgconfig","text":"

pkgconfig is a Python module to interface with the pkg-config command line tool.

https://github.com/matze/pkgconfig

"},{"location":"available_software/detail/pkgconfig/#available-modules","title":"Available modules","text":"

The overview below shows which pkgconfig installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pkgconfig, load one of these modules using a module load command like:

module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x - x x"},{"location":"available_software/detail/poetry/","title":"poetry","text":"

Python packaging and dependency management made easy. Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.

https://python-poetry.org

"},{"location":"available_software/detail/poetry/#available-modules","title":"Available modules","text":"

The overview below shows which poetry installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using poetry, load one of these modules using a module load command like:

module load poetry/1.6.1-GCCcore-13.2.0\n
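Once loaded, poetry manages a Python project's dependencies from its `pyproject.toml`. A minimal sketch (`myproject` and `requests` are placeholder names):

```shell
# Create a new project skeleton with a pyproject.toml
poetry new myproject
cd myproject

# Declare a dependency, then resolve and install everything
# into a project-specific virtual environment
poetry add requests
poetry install
```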

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 poetry/1.6.1-GCCcore-13.2.0 x x x x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/poetry/#poetry161-gcccore-1320","title":"poetry/1.6.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

attrs-23.1.0, build-0.10.0, cachecontrol-0.13.1, certifi-2023.7.22, charset-normalizer-3.3.1, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.6, html5lib-1.1, idna-3.4, importlib_metadata-6.8.0, installer-0.7.0, jaraco.classes-3.3.0, jeepney-0.8.0, jsonschema-4.17.3, keyring-24.2.0, lockfile-0.12.2, more-itertools-10.1.0, msgpack-1.0.7, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, poetry-1.6.1, poetry_core-1.7.0, poetry_plugin_export-1.5.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.20.0, rapidfuzz-2.15.2, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.4, six-1.16.0, tomlkit-0.12.1, urllib3-2.0.7, webencodings-0.5.1, zipp-3.17.0

"},{"location":"available_software/detail/poetry/#poetry151-gcccore-1230","title":"poetry/1.5.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

attrs-23.1.0, build-0.10.0, CacheControl-0.12.14, certifi-2023.5.7, charset-normalizer-3.1.0, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.5, html5lib-1.1, idna-3.4, importlib_metadata-6.7.0, installer-0.7.0, jaraco.classes-3.2.3, jeepney-0.8.0, jsonschema-4.17.3, keyring-23.13.1, lockfile-0.12.2, more-itertools-9.1.0, msgpack-1.0.5, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, poetry-1.5.1, poetry_core-1.6.1, poetry_plugin_export-1.4.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.19.3, rapidfuzz-2.15.1, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.0, six-1.16.0, tomlkit-0.11.8, urllib3-1.26.16, webencodings-0.5.1, zipp-3.15.0

"},{"location":"available_software/detail/protobuf-python/","title":"protobuf-python","text":"

Python Protocol Buffers runtime library.

https://github.com/google/protobuf/

"},{"location":"available_software/detail/protobuf-python/#available-modules","title":"Available modules","text":"

The overview below shows which protobuf-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using protobuf-python, load one of these modules using a module load command like:

module load protobuf-python/4.24.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/protobuf/","title":"protobuf","text":"

Protocol Buffers (a.k.a., protobuf) are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.

https://github.com/protocolbuffers/protobuf

"},{"location":"available_software/detail/protobuf/#available-modules","title":"Available modules","text":"

The overview below shows which protobuf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using protobuf, load one of these modules using a module load command like:

module load protobuf/24.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf/24.0-GCCcore-12.3.0 x x x x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/psycopg2/","title":"psycopg2","text":"

Psycopg is the most popular PostgreSQL adapter for the Python programming language.

https://psycopg.org/

"},{"location":"available_software/detail/psycopg2/#available-modules","title":"Available modules","text":"

The overview below shows which psycopg2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using psycopg2, load one of these modules using a module load command like:

module load psycopg2/2.9.9-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 psycopg2/2.9.9-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/psycopg2/#psycopg2299-gcccore-1230","title":"psycopg2/2.9.9-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

psycopg2-2.9.9

"},{"location":"available_software/detail/pyMBE/","title":"pyMBE","text":"

pyMBE: the Python-based Molecule Builder for ESPResSo. pyMBE provides tools to facilitate building up molecules with complex architectures in the Molecular Dynamics software ESPResSo.

"},{"location":"available_software/detail/pyMBE/#available-modules","title":"Available modules","text":"

The overview below shows which pyMBE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyMBE, load one of these modules using a module load command like:

module load pyMBE/0.8.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyMBE/0.8.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/pyMBE/#pymbe080-foss-2023b","title":"pyMBE/0.8.0-foss-2023b","text":"

This is a list of extensions included in the module:

biopandas-0.5.1.dev0, looseversion-1.1.2, mmtf-python-1.1.3, Pint-Pandas-0.5, pyMBE-0.8.0

"},{"location":"available_software/detail/pybind11/","title":"pybind11","text":"

pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.

https://pybind11.readthedocs.io

"},{"location":"available_software/detail/pybind11/#available-modules","title":"Available modules","text":"

The overview below shows which pybind11 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pybind11, load one of these modules using a module load command like:

module load pybind11/2.11.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pybind11/2.11.1-GCCcore-13.2.0 x x x x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pydantic/","title":"pydantic","text":"

Data validation and settings management using Python type hinting.

https://github.com/samuelcolvin/pydantic

"},{"location":"available_software/detail/pydantic/#available-modules","title":"Available modules","text":"

The overview below shows which pydantic installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pydantic, load one of these modules using a module load command like:

module load pydantic/2.7.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pydantic/2.7.4-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/pydantic/#pydantic274-gcccore-1320","title":"pydantic/2.7.4-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

annotated_types-0.6.0, pydantic-2.7.4, pydantic_core-2.18.4

"},{"location":"available_software/detail/pyfaidx/","title":"pyfaidx","text":"

pyfaidx: efficient, Pythonic random access to FASTA subsequences

https://pypi.python.org/pypi/pyfaidx

"},{"location":"available_software/detail/pyfaidx/#available-modules","title":"Available modules","text":"

The overview below shows which pyfaidx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyfaidx, load one of these modules using a module load command like:

module load pyfaidx/0.8.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyfaidx/0.8.1.1-GCCcore-13.2.0 x x x x x x x x x pyfaidx/0.8.1.1-GCCcore-12.3.0 x x x x x x x x x pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pyfaidx/#pyfaidx0811-gcccore-1230","title":"pyfaidx/0.8.1.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

importlib_metadata-7.0.1, pyfaidx-0.8.1.1, zipp-3.17.0

"},{"location":"available_software/detail/pyproj/","title":"pyproj","text":"

Python interface to the PROJ4 library for cartographic transformations

https://pyproj4.github.io/pyproj

"},{"location":"available_software/detail/pyproj/#available-modules","title":"Available modules","text":"

The overview below shows which pyproj installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyproj, load one of these modules using a module load command like:

module load pyproj/3.6.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyproj/3.6.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pystencils/","title":"pystencils","text":"

pystencils uses sympy to define stencil operations that can be executed on numpy arrays

https://pycodegen.pages.i10git.cs.fau.de/pystencils

"},{"location":"available_software/detail/pystencils/#available-modules","title":"Available modules","text":"

The overview below shows which pystencils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pystencils, load one of these modules using a module load command like:

module load pystencils/1.3.4-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pystencils/1.3.4-gfbf-2023b x x x x x x x x x"},{"location":"available_software/detail/pystencils/#pystencils134-gfbf-2023b","title":"pystencils/1.3.4-gfbf-2023b","text":"

This is a list of extensions included in the module:

pystencils-1.3.4

"},{"location":"available_software/detail/pytest-flakefinder/","title":"pytest-flakefinder","text":"

Runs tests multiple times to expose flakiness.

https://github.com/dropbox/pytest-flakefinder

"},{"location":"available_software/detail/pytest-flakefinder/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-flakefinder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-flakefinder, load one of these modules using a module load command like:

module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pytest-rerunfailures/","title":"pytest-rerunfailures","text":"

pytest plugin to re-run tests to eliminate flaky failures.

https://github.com/pytest-dev/pytest-rerunfailures

"},{"location":"available_software/detail/pytest-rerunfailures/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-rerunfailures installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-rerunfailures, load one of these modules using a module load command like:

module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pytest-shard/","title":"pytest-shard","text":"

pytest plugin to support parallelism across multiple machines. Shards tests based on a hash of their test name, enabling easy parallelism across machines, suitable for a wide variety of continuous integration services. Tests are split at the finest level of granularity, individual test cases, enabling parallelism even if all of your tests are in a single file (or even a single parameterized test method).

https://github.com/AdamGleave/pytest-shard

"},{"location":"available_software/detail/pytest-shard/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-shard installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-shard, load one of these modules using a module load command like:

module load pytest-shard/0.1.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/python-casacore/","title":"python-casacore","text":"

Python-casacore is a set of Python bindings for casacore, a C++ library used in radio astronomy. Python-casacore replaces the old pyrap.

https://casacore.github.io/python-casacore/#

"},{"location":"available_software/detail/python-casacore/#available-modules","title":"Available modules","text":"

The overview below shows which python-casacore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-casacore, load one of these modules using a module load command like:

module load python-casacore/3.5.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-casacore/3.5.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/python-casacore/#python-casacore352-foss-2023b","title":"python-casacore/3.5.2-foss-2023b","text":"

This is a list of extensions included in the module:

python-casacore-3.5.2, setuptools-69.1.0

"},{"location":"available_software/detail/python-isal/","title":"python-isal","text":"

Faster zlib- and gzip-compatible compression and decompression by providing Python bindings for the isa-l library.

https://github.com/pycompression/python-isal

"},{"location":"available_software/detail/python-isal/#available-modules","title":"Available modules","text":"

The overview below shows which python-isal installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-isal, load one of these modules using a module load command like:

module load python-isal/1.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-isal/1.1.0-GCCcore-12.3.0 x x x x x x x x x python-isal/1.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/python-xxhash/","title":"python-xxhash","text":"

xxhash is a Python binding for the xxHash library by Yann Collet.

https://github.com/ifduyue/python-xxhash

"},{"location":"available_software/detail/python-xxhash/#available-modules","title":"Available modules","text":"

The overview below shows which python-xxhash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-xxhash, load one of these modules using a module load command like:

module load python-xxhash/3.4.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-xxhash/3.4.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/python-xxhash/#python-xxhash341-gcccore-1230","title":"python-xxhash/3.4.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

xxhash-3.4.1

"},{"location":"available_software/detail/re2c/","title":"re2c","text":"

re2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using the traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.

https://re2c.org

"},{"location":"available_software/detail/re2c/#available-modules","title":"Available modules","text":"

The overview below shows which re2c installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using re2c, load one of these modules using a module load command like:

module load re2c/3.1-GCCcore-13.2.0\n
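After loading the module, re2c turns a lexer specification into C source code. A minimal sketch (`lexer.re` is a placeholder input file):

```shell
# Generate a C lexer from a re2c specification
re2c -o lexer.c lexer.re
```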

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 re2c/3.1-GCCcore-13.2.0 x x x x x x x x x re2c/3.1-GCCcore-12.3.0 x x x x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/rpy2/","title":"rpy2","text":"

rpy2 is an interface to R running embedded in a Python process.

https://rpy2.github.io

"},{"location":"available_software/detail/rpy2/#available-modules","title":"Available modules","text":"

The overview below shows which rpy2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using rpy2, load one of these modules using a module load command like:

module load rpy2/3.5.15-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 rpy2/3.5.15-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/rpy2/#rpy23515-foss-2023a","title":"rpy2/3.5.15-foss-2023a","text":"

This is a list of extensions included in the module:

coverage-7.4.3, pytest-cov-4.1.0, rpy2-3.5.15, tzlocal-5.2

"},{"location":"available_software/detail/scikit-build-core/","title":"scikit-build-core","text":"

Scikit-build-core is a complete ground-up rewrite of scikit-build on top of modern packaging APIs. It provides a bridge between CMake and the Python build system, allowing you to make Python modules with CMake.

https://scikit-build.readthedocs.io/en/latest/

"},{"location":"available_software/detail/scikit-build-core/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-build-core installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-build-core, load one of these modules using a module load command like:

module load scikit-build-core/0.9.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build-core/0.9.3-GCCcore-13.2.0 x x x x x x x x x scikit-build-core/0.9.3-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/scikit-build-core/#scikit-build-core093-gcccore-1320","title":"scikit-build-core/0.9.3-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

scikit_build_core-0.9.3

"},{"location":"available_software/detail/scikit-build-core/#scikit-build-core093-gcccore-1230","title":"scikit-build-core/0.9.3-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

pyproject-metadata-0.8.0, scikit_build_core-0.9.3

"},{"location":"available_software/detail/scikit-build/","title":"scikit-build","text":"

Scikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions.

https://scikit-build.readthedocs.io/en/latest

"},{"location":"available_software/detail/scikit-build/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-build installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-build, load one of these modules using a module load command like:

module load scikit-build/0.17.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1320","title":"scikit-build/0.17.6-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

distro-1.8.0, packaging-23.1, scikit_build-0.17.6

"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1230","title":"scikit-build/0.17.6-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

distro-1.8.0, packaging-23.1, scikit_build-0.17.6

"},{"location":"available_software/detail/scikit-learn/","title":"scikit-learn","text":"

Scikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.

https://scikit-learn.org/stable/index.html

"},{"location":"available_software/detail/scikit-learn/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-learn installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-learn, load one of these modules using a module load command like:

module load scikit-learn/1.4.0-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-learn/1.4.0-gfbf-2023b x x x x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/scikit-learn/#scikit-learn140-gfbf-2023b","title":"scikit-learn/1.4.0-gfbf-2023b","text":"

This is a list of extensions included in the module:

scikit-learn-1.4.0, sklearn-0.0

"},{"location":"available_software/detail/scikit-learn/#scikit-learn131-gfbf-2023a","title":"scikit-learn/1.3.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

scikit-learn-1.3.1, sklearn-0.0

"},{"location":"available_software/detail/setuptools-rust/","title":"setuptools-rust","text":"

setuptools-rust is a plugin for setuptools to build Rust Python extensions implemented with PyO3 or rust-cpython.

https://github.com/PyO3/setuptools-rust

"},{"location":"available_software/detail/setuptools-rust/#available-modules","title":"Available modules","text":"

The overview below shows which setuptools-rust installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using setuptools-rust, load one of these modules using a module load command like:

module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust180-gcccore-1320","title":"setuptools-rust/1.8.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

semantic_version-2.10.0, setuptools-rust-1.8.0, typing_extensions-4.8.0

"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust160-gcccore-1230","title":"setuptools-rust/1.6.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

semantic_version-2.10.0, setuptools-rust-1.6.0, typing_extensions-4.6.3

"},{"location":"available_software/detail/setuptools/","title":"setuptools","text":"

Easily download, build, install, upgrade, and uninstall Python packages

https://pypi.org/project/setuptools

"},{"location":"available_software/detail/setuptools/#available-modules","title":"Available modules","text":"

The overview below shows which setuptools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using setuptools, load one of these modules using a module load command like:

module load setuptools/64.0.3-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools/64.0.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/siscone/","title":"siscone","text":"

Hadron Seedless Infrared-Safe Cone jet algorithm

https://siscone.hepforge.org/

"},{"location":"available_software/detail/siscone/#available-modules","title":"Available modules","text":"

The overview below shows which siscone installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using siscone, load one of these modules using a module load command like:

module load siscone/3.0.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 siscone/3.0.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/snakemake/","title":"snakemake","text":"

The Snakemake workflow management system is a tool to create reproducible and scalable data analyses.

https://snakemake.readthedocs.io

"},{"location":"available_software/detail/snakemake/#available-modules","title":"Available modules","text":"

The overview below shows which snakemake installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using snakemake, load one of these modules using a module load command like:

module load snakemake/8.4.2-foss-2023a\n
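With the module loaded, a workflow is described in a file named `Snakefile`; a minimal sketch (hypothetical rule and file names) that could then be run with `snakemake --cores 1`:

```
rule all:
    input: "hello.txt"

rule hello:
    output: "hello.txt"
    shell: "echo hello > {output}"
```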

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 snakemake/8.4.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/snakemake/#snakemake842-foss-2023a","title":"snakemake/8.4.2-foss-2023a","text":"

This is a list of extensions included in the module:

argparse-dataclass-2.0.0, conda-inject-1.3.1, ConfigArgParse-1.7, connection-pool-0.0.3, datrie-0.8.2, dpath-2.1.6, fastjsonschema-2.19.1, humanfriendly-10.0, immutables-0.20, jupyter-core-5.7.1, nbformat-5.9.2, plac-1.4.2, reretry-0.11.8, smart-open-6.4.0, snakemake-8.4.2, snakemake-executor-plugin-cluster-generic-1.0.7, snakemake-executor-plugin-cluster-sync-0.1.3, snakemake-executor-plugin-flux-0.1.0, snakemake-executor-plugin-slurm-0.2.1, snakemake-executor-plugin-slurm-jobstep-0.1.10, snakemake-interface-common-1.15.2, snakemake-interface-executor-plugins-8.2.0, snakemake-interface-storage-plugins-3.0.0, stopit-1.1.2, throttler-1.2.2, toposort-1.10, yte-1.5.4

"},{"location":"available_software/detail/snappy/","title":"snappy","text":"

Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.

https://github.com/google/snappy

"},{"location":"available_software/detail/snappy/#available-modules","title":"Available modules","text":"

The overview below shows which snappy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using snappy, load one of these modules using a module load command like:

module load snappy/1.1.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 snappy/1.1.10-GCCcore-13.2.0 x x x x x x x x x snappy/1.1.10-GCCcore-12.3.0 x x x x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/spglib-python/","title":"spglib-python","text":"

Spglib for Python. Spglib is a library for finding and handling crystal symmetries written in C.

https://pypi.python.org/pypi/spglib

"},{"location":"available_software/detail/spglib-python/#available-modules","title":"Available modules","text":"

The overview below shows which spglib-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using spglib-python, load one of these modules using a module load command like:

module load spglib-python/2.0.2-gfbf-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 spglib-python/2.0.2-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/statsmodels/","title":"statsmodels","text":"

Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.

https://www.statsmodels.org/

"},{"location":"available_software/detail/statsmodels/#available-modules","title":"Available modules","text":"

The overview below shows which statsmodels installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using statsmodels, load one of these modules using a module load command like:

module load statsmodels/0.14.1-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 statsmodels/0.14.1-gfbf-2023b x x x x x x x x x statsmodels/0.14.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/statsmodels/#statsmodels0141-gfbf-2023b","title":"statsmodels/0.14.1-gfbf-2023b","text":"

This is a list of extensions included in the module:

patsy-0.5.6, statsmodels-0.14.1

"},{"location":"available_software/detail/statsmodels/#statsmodels0141-gfbf-2023a","title":"statsmodels/0.14.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

patsy-0.5.6, statsmodels-0.14.1

"},{"location":"available_software/detail/sympy/","title":"sympy","text":"

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.

https://sympy.org/

"},{"location":"available_software/detail/sympy/#available-modules","title":"Available modules","text":"

The overview below shows which sympy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using sympy, load one of these modules using a module load command like:

module load sympy/1.12-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 sympy/1.12-gfbf-2023b x x x x x x x x x sympy/1.12-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/tbb/","title":"tbb","text":"

Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.

https://github.com/oneapi-src/oneTBB

"},{"location":"available_software/detail/tbb/#available-modules","title":"Available modules","text":"

The overview below shows which tbb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tbb, load one of these modules using a module load command like:

module load tbb/2021.13.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tbb/2021.13.0-GCCcore-13.2.0 - - - x x x x x x tbb/2021.11.0-GCCcore-12.3.0 x x x x x x x x x tbb/2021.10.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/tcsh/","title":"tcsh","text":"

Tcsh is an enhanced, but completely compatible version of the Berkeley UNIX C shell (csh). It is a command language interpreter usable both as an interactive login shell and a shell script command processor. It includes a command-line editor, programmable word completion, spelling correction, a history mechanism, job control and a C-like syntax.

https://www.tcsh.org

"},{"location":"available_software/detail/tcsh/#available-modules","title":"Available modules","text":"

The overview below shows which tcsh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tcsh, load one of these modules using a module load command like:

module load tcsh/6.24.07-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tcsh/6.24.07-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/time/","title":"time","text":"

The `time' command runs another program, then displays information about the resources used by that program, collected by the system while the program was running.

https://www.gnu.org/software/time/

"},{"location":"available_software/detail/time/#available-modules","title":"Available modules","text":"

The overview below shows which time installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using time, load one of these modules using a module load command like:

module load time/1.9-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 time/1.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/tmux/","title":"tmux","text":"

tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.

https://github.com/tmux/tmux/

"},{"location":"available_software/detail/tmux/#available-modules","title":"Available modules","text":"

The overview below shows which tmux installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tmux, load one of these modules using a module load command like:

module load tmux/3.3a-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tmux/3.3a-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/tornado/","title":"tornado","text":"

Tornado is a Python web framework and asynchronous networking library.

https://github.com/tornadoweb/tornado

"},{"location":"available_software/detail/tornado/#available-modules","title":"Available modules","text":"

The overview below shows which tornado installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tornado, load one of these modules using a module load command like:

module load tornado/6.3.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tornado/6.3.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/tqdm/","title":"tqdm","text":"

A fast, extensible progress bar for Python and CLI

https://github.com/tqdm/tqdm

"},{"location":"available_software/detail/tqdm/#available-modules","title":"Available modules","text":"

The overview below shows which tqdm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tqdm, load one of these modules using a module load command like:

module load tqdm/4.66.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tqdm/4.66.2-GCCcore-13.2.0 x x x x x x x x x tqdm/4.66.1-GCCcore-12.3.0 x x x x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/typing-extensions/","title":"typing-extensions","text":"

Typing Extensions - Backported and Experimental Type Hints for Python

https://github.com/python/typing_extensions

"},{"location":"available_software/detail/typing-extensions/#available-modules","title":"Available modules","text":"

The overview below shows which typing-extensions installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using typing-extensions, load one of these modules using a module load command like:

module load typing-extensions/4.10.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 typing-extensions/4.10.0-GCCcore-13.2.0 x x x x x x x x x typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/unixODBC/","title":"unixODBC","text":"

unixODBC provides a uniform interface between applications and database drivers

https://www.unixodbc.org

"},{"location":"available_software/detail/unixODBC/#available-modules","title":"Available modules","text":"

The overview below shows which unixODBC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using unixODBC, load one of these modules using a module load command like:

module load unixODBC/2.3.12-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 unixODBC/2.3.12-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/utf8proc/","title":"utf8proc","text":"

utf8proc is a small, clean C library that provides Unicode normalization, case-folding, and other operations for data in the UTF-8 encoding.

https://github.com/JuliaStrings/utf8proc

"},{"location":"available_software/detail/utf8proc/#available-modules","title":"Available modules","text":"

The overview below shows which utf8proc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using utf8proc, load one of these modules using a module load command like:

module load utf8proc/2.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 utf8proc/2.9.0-GCCcore-13.2.0 x x x x x x x x x utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/virtualenv/","title":"virtualenv","text":"

A tool for creating isolated virtual Python environments.

https://github.com/pypa/virtualenv

"},{"location":"available_software/detail/virtualenv/#available-modules","title":"Available modules","text":"

The overview below shows which virtualenv installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using virtualenv, load one of these modules using a module load command like:

module load virtualenv/20.24.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/virtualenv/#virtualenv20246-gcccore-1320","title":"virtualenv/20.24.6-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

distlib-0.3.7, filelock-3.13.0, platformdirs-3.11.0, virtualenv-20.24.6

"},{"location":"available_software/detail/virtualenv/#virtualenv20231-gcccore-1230","title":"virtualenv/20.23.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

distlib-0.3.6, filelock-3.12.2, platformdirs-3.8.0, virtualenv-20.23.1

"},{"location":"available_software/detail/waLBerla/","title":"waLBerla","text":"

Widely applicable Lattice-Boltzmann from Erlangen is a block-structured high-performance framework for multiphysics simulations

https://walberla.net/index.html

"},{"location":"available_software/detail/waLBerla/#available-modules","title":"Available modules","text":"

The overview below shows which waLBerla installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using waLBerla, load one of these modules using a module load command like:

module load waLBerla/6.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 waLBerla/6.1-foss-2023a x x x x x x x x x waLBerla/6.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/wget/","title":"wget","text":"

GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive commandline tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.

https://www.gnu.org/software/wget

"},{"location":"available_software/detail/wget/#available-modules","title":"Available modules","text":"

The overview below shows which wget installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wget, load one of these modules using a module load command like:

module load wget/1.24.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wget/1.24.5-GCCcore-12.3.0 x x x x x x x x x wget/1.21.4-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/wradlib/","title":"wradlib","text":"

The wradlib project has been initiated in order to facilitate the use of weather radar data as well as to provide a common platform for research on new algorithms.

https://docs.wradlib.org/

"},{"location":"available_software/detail/wradlib/#available-modules","title":"Available modules","text":"

The overview below shows which wradlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wradlib, load one of these modules using a module load command like:

module load wradlib/2.0.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wradlib/2.0.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/wradlib/#wradlib203-foss-2023a","title":"wradlib/2.0.3-foss-2023a","text":"

This is a list of extensions included in the module:

cmweather-0.3.2, deprecation-2.1.0, lat_lon_parser-1.3.0, wradlib-2.0.3, xarray-datatree-0.0.13, xmltodict-0.13.0, xradar-0.5.1

"},{"location":"available_software/detail/wrapt/","title":"wrapt","text":"

The aim of the wrapt module is to provide a transparent object proxy for Python, which can be used as the basis for the construction of function wrappers and decorator functions.

https://pypi.org/project/wrapt/

"},{"location":"available_software/detail/wrapt/#available-modules","title":"Available modules","text":"

The overview below shows which wrapt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wrapt, load one of these modules using a module load command like:

module load wrapt/1.15.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wrapt/1.15.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/wrapt/#wrapt1150-gfbf-2023a","title":"wrapt/1.15.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

wrapt-1.15.0

"},{"location":"available_software/detail/wxWidgets/","title":"wxWidgets","text":"

wxWidgets is a C++ library that lets developers create applications for Windows, Mac OS X, Linux and other platforms with a single code base. It has popular language bindings for Python, Perl, Ruby and many other languages, and unlike other cross-platform toolkits, wxWidgets gives applications a truly native look and feel because it uses the platform's native API rather than emulating the GUI.

https://www.wxwidgets.org

"},{"location":"available_software/detail/wxWidgets/#available-modules","title":"Available modules","text":"

The overview below shows which wxWidgets installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wxWidgets, load one of these modules using a module load command like:

module load wxWidgets/3.2.6-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wxWidgets/3.2.6-GCC-13.2.0 x x x x x x x x x wxWidgets/3.2.2.1-GCC-12.3.0 x x x x x x x x x wxWidgets/3.2.2.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/x264/","title":"x264","text":"

x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.

https://www.videolan.org/developers/x264.html

"},{"location":"available_software/detail/x264/#available-modules","title":"Available modules","text":"

The overview below shows which x264 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using x264, load one of these modules using a module load command like:

module load x264/20231019-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 x264/20231019-GCCcore-13.2.0 x x x x x x x x x x264/20230226-GCCcore-12.3.0 x x x x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/x265/","title":"x265","text":"

x265 is a free software library and application for encoding video streams into the H.265/HEVC compression format, and is released under the terms of the GNU GPL.

https://x265.org/

"},{"location":"available_software/detail/x265/#available-modules","title":"Available modules","text":"

The overview below shows which x265 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using x265, load one of these modules using a module load command like:

module load x265/3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 x265/3.5-GCCcore-13.2.0 x x x x x x x x x x265/3.5-GCCcore-12.3.0 x x x x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/xarray/","title":"xarray","text":"

xarray (formerly xray) is an open source project and Python package that aims to bring the labeled data power of pandas to the physical sciences, by providing N-dimensional variants of the core pandas data structures.

https://github.com/pydata/xarray

"},{"location":"available_software/detail/xarray/#available-modules","title":"Available modules","text":"

The overview below shows which xarray installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xarray, load one of these modules using a module load command like:

module load xarray/2023.9.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xarray/2023.9.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/xarray/#xarray202390-gfbf-2023a","title":"xarray/2023.9.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

xarray-2023.9.0

"},{"location":"available_software/detail/xorg-macros/","title":"xorg-macros","text":"

X.org macros utilities.

https://gitlab.freedesktop.org/xorg/util/macros

"},{"location":"available_software/detail/xorg-macros/#available-modules","title":"Available modules","text":"

The overview below shows which xorg-macros installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xorg-macros, load one of these modules using a module load command like:

module load xorg-macros/1.20.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/xprop/","title":"xprop","text":"

The xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.

https://www.x.org/wiki/

"},{"location":"available_software/detail/xprop/#available-modules","title":"Available modules","text":"

The overview below shows which xprop installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xprop, load one of these modules using a module load command like:

module load xprop/1.2.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xprop/1.2.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/xxHash/","title":"xxHash","text":"

xxHash is an extremely fast non-cryptographic hash algorithm, working at RAM speed limit.

https://cyan4973.github.io/xxHash

"},{"location":"available_software/detail/xxHash/#available-modules","title":"Available modules","text":"

The overview below shows which xxHash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xxHash, load one of these modules using a module load command like:

module load xxHash/0.8.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxHash/0.8.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/xxd/","title":"xxd","text":"

xxd is part of the VIM package and this will only install xxd, not vim! xxd converts to/from hexdumps of binary files.

https://www.vim.org

"},{"location":"available_software/detail/xxd/#available-modules","title":"Available modules","text":"

The overview below shows which xxd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xxd, load one of these modules using a module load command like:

module load xxd/9.1.0307-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxd/9.1.0307-GCCcore-13.2.0 x x x x x x x x x xxd/9.0.2112-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yell/","title":"yell","text":"

Yell - Your Extensible Logging Library is a comprehensive logging replacement for Ruby.

https://github.com/rudionrails/yell

"},{"location":"available_software/detail/yell/#available-modules","title":"Available modules","text":"

The overview below shows which yell installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yell, load one of these modules using a module load command like:

module load yell/2.2.2-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yell/2.2.2-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yelp-tools/","title":"yelp-tools","text":"

yelp-tools is a collection of scripts and build utilities to help create, manage, and publish documentation for Yelp and the web. Most of the heavy lifting is done by packages like yelp-xsl and itstool. This package just wraps things up in a developer-friendly way.

https://gitlab.gnome.org/GNOME/yelp-tools

"},{"location":"available_software/detail/yelp-tools/#available-modules","title":"Available modules","text":"

The overview below shows which yelp-tools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yelp-tools, load one of these modules using a module load command like:

module load yelp-tools/42.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yelp-tools/42.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yelp-xsl/","title":"yelp-xsl","text":"

yelp-xsl is a collection of programs and data files to help you build, maintain, and distribute documentation. It provides XSLT stylesheets that can be built upon for help viewers and publishing systems. These stylesheets output JavaScript and CSS content, and reference images provided by yelp-xsl. This package also redistributes copies of the jQuery and jQuery.Syntax JavaScript libraries.

https://gitlab.gnome.org/GNOME/yelp-xsl

"},{"location":"available_software/detail/yelp-xsl/#available-modules","title":"Available modules","text":"

The overview below shows which yelp-xsl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yelp-xsl, load one of these modules using a module load command like:

module load yelp-xsl/42.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yelp-xsl/42.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/zstd/","title":"zstd","text":"

Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-offs, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.

https://facebook.github.io/zstd

"},{"location":"available_software/detail/zstd/#available-modules","title":"Available modules","text":"

The overview below shows which zstd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using zstd, load one of these modules using a module load command like:

module load zstd/1.5.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 zstd/1.5.5-GCCcore-13.2.0 x x x x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"blog/","title":"Blog","text":""},{"location":"blog/2024/05/17/isc24/","title":"EESSI promo tour @ ISC'24 (May 2024, Hamburg)","text":"

This week, we had the privilege of attending the ISC'24 conference in the beautiful city of Hamburg, Germany. This was an excellent opportunity for us to showcase EESSI, and gain valuable insights and feedback from the HPC community.

"},{"location":"blog/2024/05/17/isc24/#bof-session-on-eessi","title":"BoF session on EESSI","text":"

The EESSI Birds-of-a-Feather (BoF) session on Tuesday morning, part of the official ISC'24 program, was the highlight of our activities in Hamburg.

It was well attended, with well over 100 people joining us at 9am.

During this session, we introduced the EESSI project with a short presentation, followed by a well-received live hands-on demo of installing and using EESSI by spinning up an \"empty\" Linux virtual machine instance in Amazon EC2 and getting optimized installations of popular scientific applications like GROMACS and TensorFlow running in a matter of minutes.

During the second part of the BoF session, we engaged with the audience through an interactive poll and by letting attendees ask questions.

The presentation slides, including the results of the interactive poll and questions that were raised by attendees, are available here.

"},{"location":"blog/2024/05/17/isc24/#workshops","title":"Workshops","text":"

During the last day of ISC'24, EESSI was present in no fewer than three different workshops.

"},{"location":"blog/2024/05/17/isc24/#risc-v-workshop","title":"RISC-V workshop","text":"

At the Fourth International workshop on RISC-V for HPC, Juli\u00e1n Morillo (BSC) presented our paper \"Preparing to Hit the Ground Running: Adding RISC-V support to EESSI\" (slides available here).

Juli\u00e1n covered the initial work that was done in the scope of the MultiXscale EuroHPC Centre-of-Excellence to add support for RISC-V to EESSI, outlined the challenges we encountered, and shared the lessons we have learned along the way.

"},{"location":"blog/2024/05/17/isc24/#ahug-workshop","title":"AHUG workshop","text":"

During the Arm HPC User Group (AHUG) workshop, Kenneth Hoste (HPC-UGent) gave a talk entitled \"Extending Arm\u2019s Reach by Going EESSI\" (slides available here).

Next to a high-level introduction to EESSI, we briefly covered some of the challenges we encountered when testing the optimized software installations that we had built for the Arm Neoverse V1 microarchitecture, including bugs in OpenMPI and GROMACS.

Kenneth gave a live demonstration of how to get access to EESSI and start running the optimized software installations we provide through our CernVM-FS repository on a fresh AWS Graviton 3 instance in a matter of minutes.

"},{"location":"blog/2024/05/17/isc24/#pop-workshop","title":"POP workshop","text":"

In the afternoon on Thursday, Lara Peeters (HPC-UGent) presented MultiXscale during the Readiness of HPC Extreme-scale Applications workshop, which was organised by the POP EuroHPC Centre-of-Excellence (slides available here).

Lara outlined the pilot use cases on which MultiXscale focuses, and explained how EESSI helps to achieve the goals of MultiXscale in terms of Productivity, Performance, and Portability.

At the end of the workshop, a group picture was taken with both organisers and speakers, which was a great way to wrap up a busy week in Hamburg!

"},{"location":"blog/2024/05/17/isc24/#talks-and-demos-on-eessi-at-exhibit","title":"Talks and demos on EESSI at exhibit","text":"

Not only was EESSI part of the official ISC'24 program via a dedicated BoF session and various workshops: we were also prominently present on the exhibit floor.

"},{"location":"blog/2024/05/17/isc24/#microsoft-azure-booth","title":"Microsoft Azure booth","text":"

Microsoft Azure invited us to give a 1-hour introductory presentation on EESSI on both Monday and Wednesday at their booth during the ISC'24 exhibit, as well as to provide live demonstrations at the demo corner of their booth on Tuesday afternoon on how to get access to EESSI and the user experience it provides.

Exhibit attendees were welcome to pass by and ask questions, and did so throughout the full 4 hours we were present there.

Both Microsoft Azure and AWS have been graciously providing resources in their cloud infrastructure free-of-cost for developing, testing, and demonstrating EESSI for several years now.

"},{"location":"blog/2024/05/17/isc24/#eurohpc-booth","title":"EuroHPC booth","text":"

The MultiXscale EuroHPC Centre-of-Excellence we are actively involved in, and through which the development of EESSI has been co-funded since Jan'23, was invited by the EuroHPC JU to present its goals and preliminary achievements at their booth.

Elisabeth Ortega (HPCNow!) did the honours of giving the last talk at the EuroHPC JU booth of the ISC'24 exhibit.

"},{"location":"blog/2024/05/17/isc24/#stickers","title":"Stickers!","text":"

Last but not least: we handed out a boatload of free stickers with the logos of both MultiXscale and EESSI, as well as of various open source software projects we leverage, including EasyBuild, Lmod, and CernVM-FS.

We mostly exhausted our sticker collection during ISC'24, but don't worry: we will make sure to have more available at upcoming events...

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/","title":"Portable test run of ESPResSo on EuroHPC systems via EESSI","text":"

Since 14 June 2024, ESPResSo v4.2.2 is available in the EESSI production repository software.eessi.io, optimized for the 8 CPU targets that are fully supported by version 2023.06 of EESSI. This allows running ESPResSo effortlessly on the EuroHPC systems where EESSI is already available, like Vega and Karolina.

On 27 June 2024, an additional installation of ESPResSo v4.2.2 that is optimized for Arm A64FX processors was added, which also enables running ESPResSo efficiently on Deucalion, even though EESSI is not yet available system-wide there (see below for more details).

With the portable test for ESPResSo that is available in the EESSI test suite we can easily evaluate the scalability of ESPResSo across EuroHPC systems, even if those systems have different system architectures.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#simulating-lennard-jones-fluids-using-espresso","title":"Simulating Lennard-Jones fluids using ESPResSo","text":"

Lennard-Jones fluids model interacting soft spheres with a potential that is weakly attractive at medium range and strongly repulsive at short range. Originally designed to model noble gases, this simple setup now underpins most particle-based simulations, such as ionic liquids, polymers, proteins and colloids, where strongly repulsive pairwise potentials are desirable to prevent particles from overlapping with one another. In addition, solvated systems with atomistic resolution typically have a large excess of solvent atoms compared to solute atoms, thus Lennard-Jones interactions tend to account for a large portion of the simulation time. Compared to other potentials, the Lennard-Jones interaction is inexpensive to calculate, and its limited range allows us to partition the simulation domain into arbitrarily small regions that can be distributed among many processors.
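To make the setup concrete, the 12-6 Lennard-Jones pair potential described above can be sketched in a few lines of Python (a generic illustration in reduced units; this is not code from ESPResSo or the EESSI test suite):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential: strongly repulsive at short
    range (r << sigma), weakly attractive at medium range."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma, and reaches its minimum of
# -epsilon at r = 2**(1/6) * sigma, beyond which the attraction decays.
print(round(lennard_jones(2 ** (1 / 6)), 6))  # → -1.0
```

Because the potential decays so quickly, implementations typically truncate it at a cutoff radius, which is what makes the domain decomposition into small regions mentioned above possible.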

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#portable-test-to-evaluate-performance-of-espresso","title":"Portable test to evaluate performance of ESPResSo","text":"

To evaluate the performance of ESPResSo, we have implemented a portable test for ESPResSo in the EESSI test suite; the results shown here were collected using version 0.3.2.

After installing and configuring the EESSI test suite on Vega, Karolina, and Deucalion, running the Lennard-Jones (LJ) test case with ESPResSo 4.2.2 available in EESSI can be done with:

reframe --name \"ESPRESSO_LJ.*%module_name=ESPResSo/4.2.2\"\n

This will automatically run the LJ test case with ESPResSo across all known scales in the EESSI test suite, which range from single core up to 8 full nodes.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#performance-scalability-results-on-vega-karolina-deucalion","title":"Performance + scalability results on Vega, Karolina, Deucalion","text":"

The performance results of the tests are collected by ReFrame in a detailed JSON report.

The parallel performance of ESPResSo, expressed in particles integrated per second, scales linearly with the number of cores. On Vega using 8 nodes (1024 MPI ranks, one per physical core), ESPResSo 4.2.2 can integrate the equations of motion of roughly 615 million particles every second. On Deucalion using 8 nodes (384 cores), we observe a performance of roughly 62 million particles integrated per second.

Plotting the parallel efficiency of ESPResSo 4.2.2 (weak scaling, 2000 particles per MPI rank) on the three EuroHPC systems we used shows that it decreases approximately linearly with the logarithm of the number of cores.
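Weak-scaling parallel efficiency can be derived directly from such throughput numbers: divide the per-core throughput at N cores by the single-core throughput. A minimal sketch (the single-core rate used here is a made-up illustrative value, not a measured result):

```python
def weak_scaling_efficiency(throughput, cores, single_core_throughput):
    """Parallel efficiency for weak scaling: per-core throughput at
    'cores' cores relative to the single-core throughput."""
    return (throughput / cores) / single_core_throughput

# Vega result from the text: ~615 million particles/s on 1024 cores.
# The single-core rate of 750e3 particles/s is a hypothetical placeholder.
eff = weak_scaling_efficiency(615e6, 1024, 750e3)
print(f"parallel efficiency: {eff:.2f}")
```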

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#running-espresso-on-deucalion-via-eessi-cvmfsexec","title":"Running ESPResSo on Deucalion via EESSI + cvmfsexec","text":"

While EESSI has already been available system-wide on both Vega and Karolina for some time (see here and here for more information, respectively), it was not yet available on Deucalion when these performance experiments were run.

Nevertheless, we were able to use the optimized installation of ESPResSo for A64FX that has been available in EESSI since 27 June 2024, by leveraging the cvmfsexec tool and by creatively implementing two simple shell wrapper scripts.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#cvmfsexec-wrapper-script","title":"cvmfsexec wrapper script","text":"

The first wrapper script cvmfsexec_eessi.sh can be used to run a command in a subshell in which the EESSI CernVM-FS repository (software.eessi.io) is mounted via cvmfsexec. This script can be used by regular users on Deucalion; it does not require any special privileges beyond the Linux kernel features that cvmfsexec leverages, like user namespaces.

Contents of ~/bin/cvmfsexec_eessi.sh:

#!/bin/bash\nif [ -d /cvmfs/software.eessi.io ]; then\n    # run command directly, EESSI CernVM-FS repository is already mounted\n    \"$@\"\nelse\n    # run command in a subshell in which the EESSI CernVM-FS repository is mounted,\n    # via cvmfsexec which is set up in a unique temporary directory\n    orig_workdir=$(pwd)\n    mkdir -p /tmp/$USER\n    tmpdir=$(mktemp -p /tmp/$USER -d)\n    cd $tmpdir\n    git clone https://github.com/cvmfs/cvmfsexec.git > $tmpdir/git_clone.out 2>&1\n    cd cvmfsexec\n    ./makedist default > $tmpdir/cvmfsexec_makedist.out 2>&1\n    cd $orig_workdir\n    $tmpdir/cvmfsexec/cvmfsexec software.eessi.io -- \"$@\"\n    # cleanup\n    rm -rf $tmpdir\nfi\n

Do make sure that this script is executable:

chmod u+x ~/bin/cvmfsexec_eessi.sh\n

A simple way to test this script is to use it to inspect the contents of the EESSI repository:

~/bin/cvmfsexec_eessi.sh ls /cvmfs/software.eessi.io\n

or to start an interactive shell in which the EESSI repository is mounted:

~/bin/cvmfsexec_eessi.sh /bin/bash -l\n

The job scripts that were submitted by ReFrame on Deucalion leverage cvmfsexec_eessi.sh to set up the environment and get access to the ESPResSo v4.2.2 installation that is available in EESSI (see below).

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#orted-wrapper-script","title":"orted wrapper script","text":"

In order to get multi-node runs of ESPResSo working without having EESSI available system-wide, we also had to create a small wrapper script for the orted command that is used by Open MPI to start processes on remote nodes. This is necessary because mpirun launches orted, which must be run in an environment in which the EESSI repository is mounted. If not, MPI startup will fail with an error like \"error: execve(): orted: No such file or directory\".

This wrapper script must be named orted, and must be located in a path that is listed in $PATH.

We placed it in ~/bin/orted, and added export PATH=$HOME/bin:$PATH to our ~/.bashrc login script.

Contents of ~/bin/orted:

#!/bin/bash\n\n# first remove path to this orted wrapper from $PATH, to avoid infinite loop\norted_wrapper_dir=$(dirname $0)\nexport PATH=$(echo $PATH | tr ':' '\\n' | grep -v $orted_wrapper_dir | tr '\\n' ':')\n\n~/bin/cvmfsexec_eessi.sh orted \"$@\"\n

Do make sure that also this orted wrapper script is executable:

chmod u+x ~/bin/orted\n

If not, you will likely run into an error that starts with:

An ORTE daemon has unexpectedly failed after launch ...\n

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#slurm-job-script","title":"Slurm job script","text":"

We can use the cvmfsexec_eessi.sh script in a Slurm job script on Deucalion to initialize the EESSI environment in a subshell in which the EESSI CernVM-FS repository is mounted, and subsequently load the module for ESPResSo v4.2.2 and launch the Lennard-Jones fluid simulation via mpirun:

Job script (example using 2 full 48-core nodes on A64FX partition of Deucalion):

#!/bin/bash\n#SBATCH --ntasks=96\n#SBATCH --ntasks-per-node=48\n#SBATCH --cpus-per-task=1\n#SBATCH --time=5:0:0\n#SBATCH --partition normal-arm\n#SBATCH --export=None\n#SBATCH --mem=30000M\n~/bin/cvmfsexec_eessi.sh << EOF\nexport EESSI_SOFTWARE_SUBDIR_OVERRIDE=aarch64/a64fx\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ESPResSo/4.2.2-foss-2023a\nexport SLURM_EXPORT_ENV=HOME,PATH,LD_LIBRARY_PATH,PYTHONPATH\nmpirun -np 96 python3 lj.py\nEOF\n

(the lj.py Python script is available in the EESSI test suite, see here)

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/","title":"Extrae available in EESSI","text":"

Thanks to the work carried out under the MultiXscale CoE, we are proud to announce that, as of 22 July 2024, Extrae v4.2.0 is available in the EESSI production repository software.eessi.io, optimized for the 8 CPU targets that are fully supported by version 2023.06 of EESSI. This allows using Extrae effortlessly on EuroHPC systems where EESSI is already available, like Vega and Karolina.

It is worth noting that, as of that date, Extrae is also available in the EESSI RISC-V repository riscv.eessi.io.

Extrae is a package developed at BSC to generate Paraver trace files for post-mortem analysis of application performance. It uses different interposition mechanisms to inject probes into the target application in order to gather information about its performance. It is one of the tools used in the POP3 CoE.

The work to incorporate Extrae into EESSI started early in May. It took quite some time and effort, but it resulted in a number of updates, improvements and bug fixes for Extrae. The following sections describe the work done, the issues encountered, and the solutions adopted.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#adapting-eessi-software-layer","title":"Adapting EESSI software layer","text":"

During the first attempt to build Extrae (in this case v4.0.6) in the EESSI context, we ran into two issues:

  1. the configure script of Extrae was not able to find binutils in the location it is provided by the compat layer of EESSI, and
  2. the configure/make files of Extrae make use of the which command, which does not work in our build container.

Both problems were solved by adding a pre_configure_hook in the eb_hooks.py file of the EESSI software layer that:

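The general shape of such a hook is sketched below; this is a hedged illustration, not the actual EESSI hook (the configure option and the stand-in objects are assumptions; the real hook lives in eb_hooks.py in the EESSI software-layer repository):

```python
import os

def pre_configure_hook(self, *args, **kwargs):
    """Sketch of an EasyBuild pre-configure hook; EasyBuild calls it
    right before the configure step, with the active easyblock as 'self'."""
    if self.name == 'Extrae':
        # Point Extrae's configure script at binutils from the EESSI
        # compatibility layer ($EESSI_EPREFIX is set by the EESSI init
        # script; the exact option used by the real hook may differ).
        binutils_root = os.getenv('EESSI_EPREFIX', '') + '/usr'
        self.cfg.update('configopts', '--with-binutils=' + binutils_root)

# Stand-in objects to illustrate the hook outside of EasyBuild:
class _Cfg:
    def __init__(self):
        self.configopts = ''
    def update(self, key, value):
        # mimics EasyConfig.update(), which appends to an existing option
        setattr(self, key, (getattr(self, key) + ' ' + value).strip())

class _Block:
    name = 'Extrae'
    cfg = _Cfg()

block = _Block()
pre_configure_hook(block)
print(block.cfg.configopts)
```

Hooks like this let EESSI adjust the build of a specific package without carrying local patches in the easyconfig itself.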
"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#moving-to-version-416","title":"Moving to version 4.1.6","text":"

By the time we completed this work, v4.1.6 of Extrae was available, so we decided to switch to that version, as v4.0.6 was throwing errors in the test suite provided by Extrae through the make check command.

When first trying to build this new version, we noticed that there were still problems with binutils detection, because the configure scripts of Extrae assume that the binutils libraries are under a lib subdirectory of the provided binutils path, while in the EESSI compat layer they are directly in the provided directory (i.e. without the /lib). This was solved with a patch file committed to the EasyBuild easyconfigs repository, which modifies both configure and config/macros.m4 to make binutils detection more robust. This patch was also provided to the Extrae developers for incorporation into future releases.

The next step was to submit a Pull Request to the EasyBuild easyblocks repository with some modifications to the extrae.py easyblock that:

With all of this in place, we managed to correctly build Extrae, but found that many tests failed, including all 21 under the MPI directory. We reported this to the Extrae developers, who answered that there was a critical bug fix related to MPI tracing in version 4.1.7, so we switched to that version before continuing our work.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#work-with-version-417","title":"Work with version 4.1.7","text":"

We tested the build of that version (of course including all the work done before for previous versions) and still saw some errors in the make check phase. We first focused on the following three:

Regarding the first one, we found a bug in the Extrae test itself: mpi_comm_ranksize_f_1proc.sh was invoking trace-ldpreload.sh instead of the Fortran version trace-ldpreloadf.sh, which caused the test to fail. We submitted a Pull Request to the Extrae repository with the bugfix, which has already been merged and incorporated into new releases.

Regarding the second one, we reported it to the Extrae developers as an issue. They suggested commenting out a call in src/tracer/wrappers/pthread/pthread_wrapper.c at line 240: //Backend_Flush_pThread (pthread_self());. We confirmed that this fixed the issue, so the change has also been incorporated into the Extrae main branch for future releases.

The last failing test was related to access to HW counters on the building/testing system. The problem was that the test assumed that Extrae (through PAPI) can access HW counters (in this case, PAPI_TOT_INS). This is not always the case, since counter access is very system-dependent (it involves permissions, etc.). As a solution, we committed a patch to the Extrae repository which ensures that the test does not fail if PAPI_TOT_CYC is unavailable on the testing system. As this has not yet been incorporated into the Extrae repository, we also committed a patch file to the EasyBuild easyconfigs repository that solves the problem for this specific test, as well as for others that suffered from the same issue.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#finally-version-420","title":"Finally, version 4.2.0","text":"

Due to the bugfixes mentioned in the previous section that were incorporated into the Extrae repository, we switched again to an updated version of Extrae (in this case v4.2.0). With that updated version and the easyconfig (and patches) and easyblock modifications, tests started to pass successfully on most of the testing platforms.

We noticed, however, that Extrae produced segmentation faults when using libunwind on ARM architectures. Our approach was to report the issue to the Extrae developers and to make this dependency architecture-specific (i.e. forcing --without-unwind when building for ARM, while keeping the dependency for the other architectures). We did this in a Pull Request to the EasyBuild easyconfigs repository that is already merged. In the same Pull Request we added zlib as an explicit dependency in the easyconfig file for all architectures.

The last issue we encountered was similar to the previous one, but in this case it was seen on some RISC-V platforms and was related to dynamic memory instrumentation. We adopted the same approach: we reported the issue to the Extrae developers and added --disable-instrument-dynamic-memory to the configure options in a Pull Request that has already been merged into the EasyBuild easyconfigs repository.

With that, all tests passed in all platforms and we were able to incorporate Extrae to the list of software available in both the software.eessi.io and riscv.eessi.io repositories of EESSI.

"},{"location":"blog/2024/09/20/hpcwire-readers-choice-awards-2024/","title":"EESSI nominated for HPCwire Readers\u2019 Choice Awards 2024","text":"

EESSI has been nominated for the HPCwire Readers\u2019 Choice Awards 2024, in the \"Best HPC Programming Tool or Technology\" category.

You can help us win the award by joining the vote.

To vote, you should:

  1. Fill out and submit the form to register yourself as an HPCwire reader and access your ballot;
  2. Access your ballot here;
  3. Select your favorite in one or more categories;
  4. Submit your vote by filling in your name, organisation, and email address (to avoid ballot stuffing), and hitting the Done button.

Note that you are not required to vote in all categories; you can vote for just one nominee in a single category.

For example, you could vote for European Environment for Scientific Software Installations (EESSI) in category 13: Best HPC Programming Tool or Technology.

"},{"location":"blog/2024/10/11/ci-workflow-for-EESSI/","title":"An example CI workflow that leverages EESSI CI tools","text":"

EESSI's CI workflows are available on GitHub Actions and as a GitLab CI/CD component. Enabling this is as simple as adding EESSI's CI to your workflow of choice, giving you access to the entire EESSI software stack optimized for the relevant CPU architecture(s) in your runner's environment. If you are developing an application on top of the EESSI software stack, for example, this means you don't need to invest heavily in configuring and maintaining a CI setup: EESSI does that for you so you can focus on your code. With the EESSI CI workflows you don't have to worry about figuring out how to optimize build and runtime dependencies as these will be streamed seamlessly to your runner's environment.
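For GitHub Actions, a workflow that mounts EESSI can look roughly like the following; this is a hedged sketch based on the eessi/github-action-eessi action (the version tag and the module loaded are illustrative assumptions):

```yaml
name: test-with-eessi
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Mount the EESSI CernVM-FS repository and initialize the environment
      - name: Set up EESSI
        uses: eessi/github-action-eessi@v3
      # Subsequent steps can load modules straight from EESSI
      - name: Run a command with software from EESSI
        shell: bash
        run: |
          module load GROMACS
          gmx --version
```

The GitLab CI/CD component shown below follows the same idea: one include line, and the runner gets the full software stack streamed to it.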

"},{"location":"blog/2024/10/11/ci-workflow-for-EESSI/#using-the-ci-component-in-gitlab","title":"Using the CI component in GitLab","text":"

To showcase this, let's create a simple R package that outputs a map of the European Union and Norway, colouring in the countries participating in the MultiXscale CoE.

We'll make a package eessirmaps that relies on the popular R packages ggplot2, sf, and rnaturalearth to render and save this map. Installing GIS tools for R can be somewhat cumbersome, and becomes trickier still when it has to be done in a CI environment: sf requires the system packages libgdal-dev and libproj-dev, which would add yet another step, complicating our CI workflow. Thankfully, EESSI makes a lot of the package's dependencies available to us from the start, as well as a fully functioning version of R and the necessary R package dependencies to boot! As far as setup goes, this results in a simple CI workflow:

include:\n  - component: $CI_SERVER_FQDN/eessi/gitlab-eessi/eessi@1.0.5\n\nbuild:\n  stage: build\n  artifacts:\n    paths:\n      - msx_map.png\n  script:\n    # Create directory for personal R library\n    - mkdir $CI_BUILDS_DIR/R\n    - export R_LIBS_USER=$CI_BUILDS_DIR/R\n    # Load the R module from EESSI\n    - module load R-bundle-CRAN/2023.12-foss-2023a\n    # Install eessirmaps, the rnaturalearth dep and create the plot\n    - R -e \"install.packages('rnaturalearthdata', repos = 'https://cran.rstudio.com/');\n      remotes::install_gitlab('neves-p/eessirmaps', upgrade = FALSE);\n      eessirmaps::multixscale_map(); ggplot2::ggsave('msx_map.png', bg = 'white')\"\n

Note how we simply include the EESSI GitLab CI component and set up a blank directory for our user R libraries. Remember, because of EESSI, the environment that you develop in will be exactly the same as the one the CI is run in. Apart from the rnaturalearthdata R package, all the other dependencies are taken care of by the R-bundle-CRAN/2023.12-foss-2023a EESSI module. This is true for the system and R package dependencies.

Then we simply have to install our package to the CI environment and call the multixscale_map() function to produce the plot, which is uploaded as an artifact from the CI environment. We can then retrieve the artifact archive, unpack it and obtain the map.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/","title":"EuroHPC User Day (22-23 Oct 2024, Amsterdam)","text":"

We had a great time at the EuroHPC User Day 2024 in Amsterdam earlier this week.

Both MultiXscale and EESSI were strongly represented, and the work we have been doing was clearly being appreciated.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#visit-to-surf-snellius-at-amsterdam-science-park","title":"Visit to SURF & Snellius at Amsterdam Science Park","text":"

Most of us arrived in the afternoon of the day before the event, which gave us the chance to visit SURF on-site.

We had a short meeting there with the local team about how we could leverage Snellius, the Dutch national supercomputer, for building and testing software installations for EESSI.

We also got to visit the commercial datacenter at the Amsterdam Science Park (which will soon also host a European quantum computer!) and see Snellius up close, where we took a nice selfie.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#presentation-on-multixscale-and-eessi","title":"Presentation on MultiXscale and EESSI","text":"

After the very interesting first EuroHPC User Day in Brussels in December 2023, where MultiXscale and EESSI were mentioned as \"being well-aligned with the vision of EuroHPC JU\", we wanted to have a stronger presence at the second EuroHPC User Day in Amsterdam.

We submitted a paper entitled \"Portable test run of ESPResSo on EuroHPC systems via EESSI\" which was based on an earlier blog post we did in June 2024. Our submission was accepted, and hence the paper will be included in the upcoming proceedings of the 2nd EuroHPC User Day.

As a result, we were invited to present MultiXscale and more specifically the EESSI side of the project during one of the parallel sessions: HPC ecosystem tools. The slides of this presentation are available here.

During the Q&A after our talk various attendees asked interesting questions about specific aspects of EESSI, including:

Some attendees also provided some nice feedback on their initial experience with EESSI:

Quote by one of the attendees of the MultiXscale talk

It's very easy to install and configure CernVM-FS to provide access to EESSI based on the available documentation.

Any sysadmin can do it: it took me half a day, and that was mostly due to my own stupidity.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#mentioning-of-multixscale-and-eessi-by-other-speakers","title":"Mentioning of MultiXscale and EESSI by other speakers","text":"

It was remarkable and satisfying to see MultiXscale and EESSI being mentioned several times throughout the event, often by people and organisations who are not actively involved in either project. Clearly, word about the work we are doing is starting to spread!

Valeriu Codreanu (head of High-Performance Computing and Visualization at SURF) had some nice comments to share during his opening statement of the event about their involvement in MultiXscale and EESSI, and why a well-designed shared stack of optimized software installations is really necessary.

When an attendee of one of the plenary sessions raised a question about the lack of a uniform software stack across EuroHPC systems, Lilit Axner (Programme Manager Infrastructure at EuroHPC JU) answered that a federated platform for EuroHPC systems is currently in the works, and that more news on this will be shared soon.

In the short presentation on the EuroHPC JU system Vega we got explicitly mentioned again, alongside CernVM-FS and EasyBuild which are both used in the EESSI project.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#live-demo-of-eessi-at-walk-in-networking-session","title":"Live demo of EESSI at walk-in networking session","text":"

On Wednesday, the MultiXscale project was part of the walk-in networking session Application Support, Training and Skills.

During this session we were running a live demonstration of a small Plane Poiseuille flow simulation with ESPResSo.

The software was being provided via EESSI, and we were running the simulation on various hardware platforms, including:

Attendees could participate in a contest to win a Raspberry Pi 5 starter kit by filling out a form and answering a couple of questions related to MultiXscale.

At the end of the session we did a random draw among the participants who answered the questions correctly, and Giorgos Kosta (CaSToRC - The Cyprus Institute) came out as the lucky winner!

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#eurohpc-user-forum","title":"EuroHPC User Forum","text":"

Last but not least, the EuroHPC User Forum was presented during a plenary session.

Attendees were invited to connect with the EuroHPC User Forum representatives and each other via the dedicated Slack that has been created for it.

Lara Peeters, who is also active in the MultiXscale EuroHPC Centre-of-Excellence, is part of the EuroHPC User Forum, where she represents Digital Humanities.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#eurohpc-user-day-2025-in-denmark","title":"EuroHPC User Day 2025 in Denmark","text":"

We are already looking forward to engaging with the EuroHPC user community next year in Denmark!

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/","title":"EESSI won an HPCwire Readers' Choice Award!","text":"

We are thrilled to announce that EESSI has won an HPCwire Readers' Choice Award!

EESSI received the most votes from the HPC community in the \"Best HPC Programming Tool or Technology\" category, despite fierce competition from the other projects nominated in this category.

This news was revealed at the Supercomputing 2024 (SC'24) conference in Atlanta (US).

Thank you very much if you voted for us!

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#award-ceremony","title":"Award ceremony","text":"

A modest award ceremony was held at the Do IT Now booth on the SC'24 exhibit floor, since HPCNow! (part of the Do IT Now Group) is a partner in the MultiXscale EuroHPC Centre-of-Excellence.

The handover of the award plaque was done by Tom Tabor, CEO of Tabor Communications, Inc., the publisher of HPCWire.

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#picture-at-eurohpc-ju-booth","title":"Picture at EuroHPC JU booth","text":"

It is important to highlight that the funding provided by the EuroHPC JU to the MultiXscale Centre-of-Excellence has been a huge catalyst in the last couple of years for EESSI, which forms the technical pillar of MultiXscale.

Anders Dam Jensen, CEO of EuroHPC JU, and Daniel Opalka, head of Research & Innovation at EuroHPC JU, were more than happy to take a commemorative picture at the EuroHPC JU booth, together with representatives of some of the MultiXscale partners (Ghent University, HPCNow!, and SURF).

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#more-info","title":"More info","text":"

For more information about EESSI, check out our website: https://eessi.io.

"},{"location":"filesystem_layer/stratum1/","title":"Setting up a Stratum 1","text":"

The EESSI project provides a number of geographically distributed public Stratum 1 servers that you can use to make EESSI available on your machine(s). It is always recommended to have a local caching layer consisting of a few Squid proxies. If you want to be even better protected against network outages and increase the bandwidth between your cluster nodes and the Stratum 1 servers, you could also consider setting up a local (private) Stratum 1 server that replicates the EESSI CVMFS repository. This guarantees that you always have a full and up-to-date copy of the entire stack available in your local network.

"},{"location":"filesystem_layer/stratum1/#requirements-for-a-stratum-1","title":"Requirements for a Stratum 1","text":"

The main requirements for a Stratum 1 server are a good network connection to the clients it is going to serve, and sufficient disk space. As the EESSI repository is constantly growing, make sure that the disk space can easily be extended if necessary. Currently, we recommend having at least 1 TB available.

In terms of cores and memory, a machine with just a few (~4) cores and 4-8 GB of memory should suffice.

Various Linux distributions are supported, but we recommend one based on RHEL 8 or 9.

Finally, make sure that ports 80 and 8000 are open to clients.

"},{"location":"filesystem_layer/stratum1/#configure-the-stratum-1","title":"Configure the Stratum 1","text":"

Stratum 1 servers have to synchronize the contents of their CVMFS repositories regularly, and usually they replicate from a CVMFS Stratum 0 server. In order to ensure the stability and security of the EESSI Stratum 0 server, it has a strict firewall, and only the EESSI-maintained public Stratum 1 servers are allowed to replicate from it. However, EESSI provides a synchronisation server that can be used for setting up private Stratum 1 replica servers, and this is available at http://aws-eu-west-s1-sync.eessi.science.

Warning

In the past we have seen a few occurrences of data transfer issues when files were being pulled in by or from a Stratum 1 server. In such cases the cvmfs_server snapshot command, used for synchronizing the Stratum 1, may break with errors like failed to download <URL to file>. Trying to manually download the mentioned file with curl will also not work, and result in errors like:

curl: (56) Recv failure: Connection reset by peer\n
In all cases this was due to an intrusion prevention system scanning the associated network, and hence scanning all files going in or out of the Stratum 1. Though it was a false positive in all cases, this breaks the synchronization procedure of your Stratum 1. If this is the case, you can try switching to HTTPS by using https://aws-eu-west-s1-sync.eessi.science for synchronizing your Stratum 1. Even though there is no advantage for CVMFS itself in using HTTPS (it has built-in mechanisms for ensuring the integrity of the data), this will prevent the described issues, as the intrusion prevention system will not be able to inspect the encrypted data. However, not only does HTTPS introduce some overhead due to the encryption/decryption, it also makes caching in forward proxies impossible. Therefore, using HTTPS by default is strongly discouraged.
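If you do need to switch, the change amounts to updating the upstream URL in the replica's server configuration and re-running the synchronization. A sketch, assuming the standard CernVM-FS replica configuration layout and the software.eessi.io repository:

```
# /etc/cvmfs/repositories.d/software.eessi.io/server.conf (on the Stratum 1)
# before:
CVMFS_STRATUM0=http://aws-eu-west-s1-sync.eessi.science/cvmfs/software.eessi.io
# after:
CVMFS_STRATUM0=https://aws-eu-west-s1-sync.eessi.science/cvmfs/software.eessi.io
```

After changing this, re-run cvmfs_server snapshot software.eessi.io to verify that the synchronization now works over HTTPS.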

"},{"location":"filesystem_layer/stratum1/#manual-configuration","title":"Manual configuration","text":"

In order to set up a Stratum 1 manually, you can make use of the instructions in the Private Stratum 1 replica server section of the MultiXscale tutorial \"Best Practices for CernVM-FS in HPC\".

"},{"location":"filesystem_layer/stratum1/#configuration-using-ansible","title":"Configuration using Ansible","text":"

The recommended way for setting up an EESSI Stratum 1 is by running the Ansible playbook stratum1.yml from the filesystem-layer repository on GitHub. For the commands in this section, we are assuming that you cloned this repository, and your working directory is filesystem-layer.

Note

Installing a Stratum 1 usually requires a GEO API license key, which will be used to find the (geographically) closest Stratum 1 server for your client and proxies. However, for a private Stratum 1 this can be skipped, and you can disable the use of the GEO API in the configuration of your clients by setting CVMFS_USE_GEOAPI=no. In this case, they will just connect to your local Stratum 1 by default.
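Concretely, disabling the GEO API on a client is a one-line addition to its CernVM-FS configuration. A sketch (the Stratum 1 URL is a placeholder; adapt it to your server):

```
# /etc/cvmfs/default.local on each client
CVMFS_USE_GEOAPI=no
# let the client use your private Stratum 1 (placeholder URL):
CVMFS_SERVER_URL="http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@"
```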

If you do want to set up the GEO API, you can find more information on how to (freely) obtain this key in the CVMFS documentation: https://cvmfs.readthedocs.io/en/stable/cpt-replica.html#geo-api-setup.

You can put your license key in the local configuration file inventory/local_site_specific_vars.yml with the variables cvmfs_geo_license_key and cvmfs_geo_account_id.
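For example, with hypothetical values (both the account ID and the license key shown here are placeholders):

```yaml
# inventory/local_site_specific_vars.yml
cvmfs_geo_account_id: "123456"
cvmfs_geo_license_key: "your-maxmind-license-key"
```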

Start by installing Ansible, e.g.:

sudo yum install -y ansible\n

Then install Ansible roles for EESSI:

ansible-galaxy role install -r ./requirements.yml --force\n

Make sure you have enough space in /srv on the Stratum 1, since the snapshots of the repositories will end up there by default. To alter the directory where the snapshots get stored you can manually create a symlink before running the playbook:

sudo ln -s /lots/of/space/cvmfs /srv/cvmfs\n

Also make sure that:

- you are able to log in to the server from the machine that is going to run the playbook (preferably using an SSH key);
- you can use sudo on this machine;
- you add the hostname or IP address of your server to a cvmfsstratum1servers section in the inventory/hosts file, e.g.:

[cvmfsstratum1servers]\n12.34.56.789 ansible_ssh_user=yourusername\n

Finally, install the Stratum 1 using:

# -b to run as root, optionally use -K if a sudo password is required, and optionally include your site-specific variables\nansible-playbook -b [-K] [-e @inventory/local_site_specific_vars.yml] stratum1.yml\n
Running the playbook will automatically make replicas of all the EESSI repositories defined in inventory/group_vars/all.yml. If you only want to replicate the main software repository (software.eessi.io), you can remove the other ones from the eessi_cvmfs_repositories list in this file.

"},{"location":"filesystem_layer/stratum1/#verification-of-the-stratum-1-using-curl","title":"Verification of the Stratum 1 using curl","text":"

When the playbook has finished, your Stratum 1 should be ready. In order to test your Stratum 1, even without a client installed, you can use curl:

curl --head http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io/.cvmfspublished\n
This should return something like:

HTTP/1.1 200 OK\n...\nContent-Type: application/x-cvmfs\n

Example with the EESSI Stratum 1 running in AWS:

curl --head http://aws-eu-central-s1.eessi.science/cvmfs/software.eessi.io/.cvmfspublished\n
"},{"location":"filesystem_layer/stratum1/#verification-of-the-stratum-1-using-a-cvmfs-client","title":"Verification of the Stratum 1 using a CVMFS client","text":"

You can, of course, also test access to your Stratum 1 from a client. This requires you to install a CernVM-FS client and add the Stratum 1 to the client configuration; this is explained in more detail on the native installation page.

Then verify that the client connects to your new Stratum 1 by running:

cvmfs_config stat -v software.eessi.io\n

Assuming that your new Stratum 1 is working properly, this should return something like:

Connection: http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io through proxy DIRECT (online)\n
"},{"location":"getting_access/eessi_container/","title":"EESSI container script","text":"

The eessi_container.sh script provides a very easy yet versatile means to access EESSI. It is the preferred method to start an EESSI container as it has support for many different scenarios via various options.

This page guides you through several example scenarios illustrating the use of the script.

"},{"location":"getting_access/eessi_container/#prerequisites","title":"Prerequisites","text":""},{"location":"getting_access/eessi_container/#preparation","title":"Preparation","text":"

Clone the EESSI/software-layer repository and change into the software-layer directory by running these commands:

git clone https://github.com/EESSI/software-layer.git\ncd software-layer\n
"},{"location":"getting_access/eessi_container/#quickstart","title":"Quickstart","text":"

Run the eessi_container script (from the software-layer directory) to start a shell session in the EESSI container:

./eessi_container.sh\n

Note

Startup will take a bit longer the first time you run this because the container image is downloaded and converted.

You should see output like

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell  --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nApptainer> CernVM-FS: loading Fuse module... done\nCernVM-FS: loading Fuse module... done\n\nApptainer>\n

Note

You may have to press enter to clearly see the prompt, as some messages beginning with CernVM-FS: may have been printed after the first Apptainer> prompt was shown.

To start using EESSI, see Using EESSI/Setting up your environment.

"},{"location":"getting_access/eessi_container/#help-for-eessi_containersh","title":"Help for eessi_container.sh","text":"

The example in the Quickstart section facilitates an interactive session with read access to the EESSI software stack. It does not require any command line options, because the script eessi_container.sh uses some carefully chosen defaults. To view all options of the script and its default values, run the command

./eessi_container.sh --help\n
You should see the following output
usage: ./eessi_container.sh [OPTIONS] [[--] SCRIPT or COMMAND]\n OPTIONS:\n  -a | --access {ro,rw}  - ro (read-only), rw (read & write) [default: ro]\n  -c | --container IMG   - image file or URL defining the container to use\n                           [default: docker://ghcr.io/eessi/build-node:debian11]\n  -g | --storage DIR     - directory space on host machine (used for\n                           temporary data) [default: 1. TMPDIR, 2. /tmp]\n  -h | --help            - display this usage information [default: false]\n  -i | --host-injections - directory to link to for host_injections \n                           [default: /..storage../opt-eessi]\n  -l | --list-repos      - list available repository identifiers [default: false]\n  -m | --mode MODE       - with MODE==shell (launch interactive shell) or\n                           MODE==run (run a script or command) [default: shell]\n  -n | --nvidia MODE     - configure the container to work with NVIDIA GPUs,\n                           MODE==install for a CUDA installation, MODE==run to\n                           attach a GPU, MODE==all for both [default: false]\n  -r | --repository CFG  - configuration file or identifier defining the\n                           repository to use [default: EESSI via\n                           container configuration]\n  -u | --resume DIR/TGZ  - resume a previous run from a directory or tarball,\n                           where DIR points to a previously used tmp directory\n                           (check for output 'Using DIR as tmp ...' 
of a previous\n                           run) and TGZ is the path to a tarball which is\n                           unpacked the tmp dir stored on the local storage space\n                           (see option --storage above) [default: not set]\n  -s | --save DIR/TGZ    - save contents of tmp directory to a tarball in\n                           directory DIR or provided with the fixed full path TGZ\n                           when a directory is provided, the format of the\n                           tarball's name will be {REPO_ID}-{TIMESTAMP}.tgz\n                           [default: not set]\n  -v | --verbose         - display more information [default: false]\n  -x | --http-proxy URL  - provides URL for the env variable http_proxy\n                           [default: not set]; uses env var $http_proxy if set\n  -y | --https-proxy URL - provides URL for the env variable https_proxy\n                           [default: not set]; uses env var $https_proxy if set\n\n If value for --mode is 'run', the SCRIPT/COMMAND provided is executed. If\n arguments to the script/command start with '-' or '--', use the flag terminator\n '--' to let eessi_container.sh stop parsing arguments.\n

So, the defaults are equal to running the command

./eessi_container.sh --access ro --container docker://ghcr.io/eessi/build-node:debian11 --mode shell --repository EESSI\n
and it would either create a temporary directory under ${TMPDIR} (if defined), or /tmp (if ${TMPDIR} is not defined).
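The default temporary-directory selection can be mimicked in plain shell; this is only an illustration of the behaviour described above, not the script's actual code:

```shell
# Pick the temporary storage root the way the script does by default:
# use $TMPDIR if it is defined, fall back to /tmp otherwise,
# then create a unique eessi.* session directory underneath it.
TMPROOT=${TMPDIR:-/tmp}
SESSION_DIR=$(mktemp -d "${TMPROOT}/eessi.XXXXXXXXXX")
echo "Using ${SESSION_DIR} as tmp storage"
```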

The remainder of this page will demonstrate different scenarios using some of the command line options used for read-only access.

Other options supported by the script will be discussed in a yet-to-be-written section covering building software to be added to the EESSI stack.

"},{"location":"getting_access/eessi_container/#resuming-a-previous-session","title":"Resuming a previous session","text":"

You may have noted the following line in the output of eessi_container.sh

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\n

Note

The parameter after --resume (/tmp/eessi.abc123defg) will be different when you run eessi_container.sh.

Scroll back in your terminal and copy it so you can pass it to --resume.

Try the following command to \"resume\" from the last session.

./eessi_container.sh --resume /tmp/eessi.abc123defg\n
This should run much faster because the container image has been cached in the temporary directory (/tmp/eessi.abc123defg). You should get to the prompt (Apptainer> or Singularity>) and can use EESSI with the state where you left the previous session.

Note

The state refers to what was stored on disk, not what was changed in memory. Particularly, any environment (variable) settings are not restored automatically.

Because the /tmp/eessi.abc123defg directory contains a home directory which includes the saved history of your last session, you can easily restore the environment (variable) settings. Type history to see which commands you ran. You should be able to access the history as you would do in a normal terminal session.

"},{"location":"getting_access/eessi_container/#running-a-simple-command","title":"Running a simple command","text":"

Let's run ls /cvmfs/software.eessi.io through the eessi_container.sh script to check whether the CernVM-FS EESSI repository is accessible:

./eessi_container.sh --mode run ls /cvmfs/software.eessi.io\n

You should see an output such as

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell  --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nhost_injections  latest  versions\n

Note that this time no interactive shell session is started in the container: only the provided command is run in the container, and when that finishes you are back in the shell session where you ran the eessi_container.sh script.

This is because we used the --mode run command line option.

Note

The last line in the output is the output of the ls command, which shows the contents of the /cvmfs/software.eessi.io directory.

Also, note that there is no shell prompt (Apptainer> or Singularity>), since no interactive shell session is started in the container.

As an alternative to specifying the command as we did above, you can also do the following.

CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh --mode shell <<< ${CMD}\n

Note

We changed the mode from run to shell because we use a different method to let the script run our command, by feeding it in via the stdin input channel using <<<.

Because shell is the default value for --mode we can also omit this and simply run

CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n

"},{"location":"getting_access/eessi_container/#running-a-script","title":"Running a script","text":"

While running a simple command can be sufficient in some cases, you often want to run scripts containing multiple commands.

Let's run the script shown below.

First, copy-paste the contents for the script shown below, and create a file named eessi_architectures.sh in your current directory. Also make the script executable, by running:

chmod +x eessi_architectures.sh\n

Here are the contents for the eessi_architectures.sh script:

#!/usr/bin/env bash\n#\n# This script determines which architectures are included in the\n# latest EESSI version. It makes use of the specific directory\n# structure in the EESSI repository.\n#\n\n# determine list of available OS types\nBASE=${EESSI_CVMFS_REPO:-/cvmfs/software.eessi.io}/latest/software\ncd ${BASE}\nfor os_type in $(ls -d *)\ndo\n    # determine architecture families\n    OS_BASE=${BASE}/${os_type}\n    cd ${OS_BASE}\n    for arch_family in $(ls -d *)\n    do\n        # determine CPU microarchitectures\n        OS_ARCH_BASE=${BASE}/${os_type}/${arch_family}\n        cd ${OS_ARCH_BASE}\n        for microarch in $(ls -d *)\n        do\n            case ${microarch} in\n                amd | intel )\n                    for sub in $(ls ${microarch})\n                    do\n                        echo \"${os_type}/${arch_family}/${microarch}/${sub}\"\n                    done\n                    ;;\n                * )\n                    echo \"${os_type}/${arch_family}/${microarch}\"\n                    ;;\n            esac\n        done\n    done\ndone\n
Run the script as follows
./eessi_container.sh --mode shell < eessi_architectures.sh\n
The output should be similar to
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nlinux/aarch64/generic\nlinux/aarch64/graviton2\nlinux/aarch64/graviton3\nlinux/ppc64le/generic\nlinux/ppc64le/power9le\nlinux/x86_64/amd/zen2\nlinux/x86_64/amd/zen3\nlinux/x86_64/generic\nlinux/x86_64/intel/haswell\nlinux/x86_64/intel/skylake_avx512\n
Lines 6 to 15 show the output of the script eessi_architectures.sh.

If you want to use the mode run, you have to make the script's location available inside the container.

This can be done by mapping the current directory (${PWD}), which contains eessi_architectures.sh, to a directory that does not yet exist inside the container, using the $SINGULARITY_BIND or $APPTAINER_BIND environment variable.

For example:

SINGULARITY_BIND=${PWD}:/scripts ./eessi_container.sh --mode run /scripts/eessi_architectures.sh\n

"},{"location":"getting_access/eessi_container/#running-scripts-or-commands-with-parameters-starting-with-or-","title":"Running scripts or commands with parameters starting with - or --","text":"

Let's assume we would like to get more information about the entries of /cvmfs/software.eessi.io. If we would just run

./eessi_container.sh --mode run ls -lH /cvmfs/software.eessi.io\n
we would get an error message such as
ERROR: Unknown option: -lH\n
We can resolve this in two ways:

  1. Using the stdin channel as described above, for example, by simply running
    CMD=\"ls -lH /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
    which should result in the output similar to
    Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user   10 Jun 30  2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user   16 May  4  2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10  2021 versions\n
  2. Using the flag terminator -- which tells eessi_container.sh to stop parsing command line arguments. For example,
    ./eessi_container.sh --mode run -- ls -lH /cvmfs/software.eessi.io\n
    which should result in the output similar to
    Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q run --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif ls -lH /cvmfs/software.eessi.io\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user   10 Jun 30  2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user   16 May  4  2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10  2021 versions\n
"},{"location":"getting_access/eessi_container/#running-eessi-demos","title":"Running EESSI demos","text":"

For examples of scripts that use the software provided by EESSI, see Running EESSI demos.

"},{"location":"getting_access/eessi_container/#launching-containers-more-quickly","title":"Launching containers more quickly","text":"

Subsequent runs of eessi_container.sh may reuse temporary data of a previous session, including the already pulled container image. However, you do not always want to resume a previous session just to launch the container more quickly.

The eessi_container.sh script may (re)-use a cache directory provided via $SINGULARITY_CACHEDIR (or $APPTAINER_CACHEDIR when using Apptainer). Hence, the container image does not have to be downloaded again even when starting a new session. The example below illustrates this.

export SINGULARITY_CACHEDIR=${PWD}/container_cache_dir\ntime ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
which should produce output similar to
Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections  latest  versions\n\nreal    m40.445s\nuser    3m2.621s\nsys     0m7.402s\n
The next run using the same cache directory, e.g., by simply executing
time ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
is much faster
Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections  latest  versions\n\nreal    0m2.781s\nuser    0m0.172s\nsys     0m0.436s\n

Note

Each run of eessi_container.sh (without specifying --resume) creates a new temporary directory. The temporary directory stores, among other data, the image file of the container, which ensures that the container is available locally for a subsequent run.

However, this may quickly consume scarce resources, for example, a small partition where /tmp is located (default for temporary storage, see --help for specifying a different location).

See next section for making sure to clean up no longer needed temporary data.

"},{"location":"getting_access/eessi_container/#reducing-disk-usage","title":"Reducing disk usage","text":"

By default eessi_container.sh creates a temporary directory under /tmp. The directories are named eessi.RANDOM where RANDOM is a 10-character string. The script does not automatically remove these directories. To determine their total disk usage, simply run

du -sch /tmp/eessi.*\n
which could result in output similar to
333M    /tmp/eessi.session123\n333M    /tmp/eessi.session456\n333M    /tmp/eessi.session789\n997M    total\n
To reclaim disk space, simply remove the directories you no longer need.
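If you only want to remove stale session directories, a small find one-liner can help. This is a sketch: the 7-day threshold is arbitrary, and it assumes the default /tmp location, so double-check before deleting anything.

```shell
# Remove eessi.* session directories older than 7 days.
# BASE and the age threshold are assumptions; adjust to your setup
# (e.g. if you used --storage to point sessions elsewhere).
BASE=${BASE:-/tmp}
find "$BASE" -maxdepth 1 -type d -name 'eessi.*' -mtime +7 -exec rm -rf {} +
```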

"},{"location":"getting_access/eessi_container/#eessi-container-image","title":"EESSI container image","text":"

If you would like to directly use an EESSI container image, you can do so by configuring Apptainer to correctly mount the CVMFS repository:

# honor $TMPDIR if it is already defined, use /tmp otherwise\nif [ -z $TMPDIR ]; then\n    export WORKDIR=/tmp/$USER\nelse\n    export WORKDIR=$TMPDIR/$USER\nfi\n\nmkdir -p ${WORKDIR}/{var-lib-cvmfs,var-run-cvmfs,home}\nexport SINGULARITY_BIND=\"${WORKDIR}/var-run-cvmfs:/var/run/cvmfs,${WORKDIR}/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"${WORKDIR}/home:/home/$USER\"\nexport EESSI_REPO=\"container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io\"\nexport EESSI_CONTAINER=\"docker://ghcr.io/eessi/client:centos7\"\nsingularity shell --fusemount \"$EESSI_REPO\" \"$EESSI_CONTAINER\"\n
"},{"location":"getting_access/eessi_limactl/","title":"Installing EESSI with Lima on MacOS","text":""},{"location":"getting_access/eessi_limactl/#installation-of-lima","title":"Installation of Lima","text":"

See Lima documentation: https://lima-vm.io/docs/installation/

brew install lima\n
"},{"location":"getting_access/eessi_limactl/#installing-eessi-in-limactl-with-eessi-template","title":"Installing EESSI in limactl with EESSI template","text":""},{"location":"getting_access/eessi_limactl/#example-eessiyaml-file","title":"Example eessi.yaml file","text":"

Use the EESSI template to install a virtual machine with EESSI installed. Create an eessi.yaml file:

Install a virtual machine with a Debian image / an Ubuntu image / a Rocky 9 image
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n# Try to use release-yyyyMMdd image if available. Note that release-yyyyMMdd will be removed after several months.\n- location: \"https://cloud.debian.org/images/cloud/bookworm/20240429-1732/debian-12-genericcloud-amd64-20240429-1732.qcow2\"\n  arch: \"x86_64\"\n  digest: \"sha512:6cc752d71b390c7fea64b0b598225914a7f4adacd4a33fa366187fac01094648628e0681a109ae9320b9a79aba2832f33395fa13154dad636465b7d9cdbed599\"\n- location: \"https://cloud.debian.org/images/cloud/bookworm/20240429-1732/debian-12-genericcloud-arm64-20240429-1732.qcow2\"\n  arch: \"aarch64\"\n  digest: \"sha512:59afc40ad0062ca100c9280a281256487348c8aa23b3e70c329a6d6f29b5343b628622e63e0b9b4fc3987dd691d5f3c657233186b3271878d5e0aa0b4d264b06\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2\"\n  arch: \"x86_64\"\n- location: \"https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-arm64.qcow2\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    wget -P /tmp https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\n    sudo dpkg -i /tmp/cvmfs-release-latest_all.deb\n    rm -f /tmp/cvmfs-release-latest_all.deb\n    sudo apt-get update\n    sudo apt-get install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! 
timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n# Try to use release-yyyyMMdd image if available. Note that release-yyyyMMdd will be removed after several months.\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release-20240514/ubuntu-22.04-server-cloudimg-amd64.img\"\n  arch: \"x86_64\"\n  digest: \"sha256:1718f177dde4c461148ab7dcbdcf2f410c1f5daa694567f6a8bbb239d864b525\"\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release-20240514/ubuntu-22.04-server-cloudimg-arm64.img\"\n  arch: \"aarch64\"\n  digest: \"sha256:f6bf7305207a2adb9a2e2f701dc71f5747e5ba88f7b67cdb44b3f5fa6eea94a3\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img\"\n  arch: \"x86_64\"\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    wget -P /tmp https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\n    sudo dpkg -i /tmp/cvmfs-release-latest_all.deb\n    rm -f /tmp/cvmfs-release-latest_all.deb\n    sudo apt-get update\n    sudo apt-get install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! 
timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n- location: \"https://dl.rockylinux.org/pub/rocky/9.3/images/x86_64/Rocky-9-GenericCloud-Base-9.3-20231113.0.x86_64.qcow2\"\n  arch: \"x86_64\"\n  digest: \"sha256:7713278c37f29b0341b0a841ca3ec5c3724df86b4d97e7ee4a2a85def9b2e651\"\n- location: \"https://dl.rockylinux.org/pub/rocky/9.3/images/aarch64/Rocky-9-GenericCloud-Base-9.3-20231113.0.aarch64.qcow2\"\n  arch: \"aarch64\"\n  digest: \"sha256:1948a5e00786dbf3230335339cf96491659e17444f5d00dabac0f095a7354cc1\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://dl.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud.latest.x86_64.qcow2\"\n  arch: \"x86_64\"\n- location: \"https://dl.rockylinux.org/pub/rocky/9/images/aarch64/Rocky-9-GenericCloud.latest.aarch64.qcow2\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\n    sudo yum install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
"},{"location":"getting_access/eessi_limactl/#create-the-virtual-machine-with-the-eessiyaml-file","title":"Create the virtual machine with the eessi.yaml file","text":"
limactl create --name eessi ./eessi.yaml\n
"},{"location":"getting_access/eessi_limactl/#start-and-enter-the-virtual-machine","title":"Start and enter the virtual machine","text":"
limactl start eessi\nlimactl shell eessi\n

EESSI should now be available in the virtual machine:

user@machine:/Users/user$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n  Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\n  archdetect says x86_64/intel/haswell\n  Using x86_64/intel/haswell as software subdirectory.\n  Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\n  Found Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/SitePackage.lua\n  Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\n  Using /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all as the site extension directory to be added to MODULEPATH.\n  Initializing Lmod...\n  Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Prepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Environment set up to use EESSI (2023.06), have fun!\n
"},{"location":"getting_access/eessi_limactl/#cleanup-virtual-machine","title":"Cleanup virtual machine","text":"
limactl stop eessi\nlimactl delete eessi\nlimactl prune\n
"},{"location":"getting_access/eessi_limactl/#advanced-set-resources-for-new-virtual-machine","title":"Advanced: Set resources for new virtual machine","text":"
# Set resources\nRATIO_RAM=0.5\nRAM=$(numfmt --to=none --to-unit=1073741824 --format=%.0f  $(echo $(sysctl hw.memsize_usable | awk '{print $2}' ) \"*$RATIO_RAM\" | bc -l))\nCPUS=$(sysctl hw.physicalcpu | awk '{print $2}')\n# Create VM\nlimactl create --cpus $CPUS --memory $RAM --name eessi ./eessi.yaml\nlimactl list\n
"},{"location":"getting_access/eessi_wsl/","title":"Installing EESSI with Windows Subsystem for Linux","text":""},{"location":"getting_access/eessi_wsl/#basic-commands-with-wsl","title":"Basic commands with WSL","text":""},{"location":"getting_access/eessi_wsl/#list-the-available-linux-distributions-for-installation","title":"List the available linux distributions for installation","text":"
C:/users/user>wsl --list --online\nThe following is a list of valid distributions that can be installed.\nInstall using 'wsl.exe --install <Distro>'.\n\nNAME                                   FRIENDLY NAME\nUbuntu                                 Ubuntu\nDebian                                 Debian GNU/Linux\nkali-linux                             Kali Linux Rolling\nUbuntu-18.04                           Ubuntu 18.04 LTS\nUbuntu-20.04                           Ubuntu 20.04 LTS\nUbuntu-22.04                           Ubuntu 22.04 LTS\nUbuntu-24.04                           Ubuntu 24.04 LTS\nOracleLinux_7_9                        Oracle Linux 7.9\nOracleLinux_8_7                        Oracle Linux 8.7\nOracleLinux_9_1                        Oracle Linux 9.1\nopenSUSE-Leap-15.5                     openSUSE Leap 15.5\nSUSE-Linux-Enterprise-Server-15-SP4    SUSE Linux Enterprise Server 15 SP4\nSUSE-Linux-Enterprise-15-SP5           SUSE Linux Enterprise 15 SP5\nopenSUSE-Tumbleweed                    openSUSE Tumbleweed\n
"},{"location":"getting_access/eessi_wsl/#list-the-installed-machines","title":"List the installed machines","text":"
C:/users/user>wsl --list --verbose\n  NAME      STATE           VERSION\n* Debian    Stopped         2\n
"},{"location":"getting_access/eessi_wsl/#reconnecting-to-a-virtual-machine-with-wsl","title":"Reconnecting to a Virtual machine with wsl","text":"
C:/users/user>wsl --distribution Debian\nuser@id:~$\n

For more documentation on using WSL you can check out the following pages:

"},{"location":"getting_access/eessi_wsl/#installing-a-linux-distribution-with-wsl","title":"Installing a linux distribution with WSL","text":"
C:/users/user>wsl --install --distribution Debian\nDebian GNU/Linux is already installed.\nLaunching Debian GNU/Linux...\nInstalling, this may take a few minutes...\nPlease create a default UNIX user account. The username does not need to match your Windows username.\nFor more information visit: https://aka.ms/wslusers\nEnter new UNIX username: user\nNew password:\nRetype new password:\npasswd: password updated successfully\nInstallation successful!\n
"},{"location":"getting_access/eessi_wsl/#installing-eessi-in-the-virtual-machine","title":"Installing EESSI in the virtual machine","text":"
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nsudo apt-get install wget\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
"},{"location":"getting_access/eessi_wsl/#start-cernvm-fs-in-windows-subsystem-for-linux","title":"Start CernVM-FS in Windows Subsystem for Linux","text":"

When the virtual machine is restarted, CernVM-FS needs to be remounted with the following command.

# start CernVM-FS on WSL\nsudo cvmfs_config wsl2_start\n

If you do not wish to do this after every restart, you can set up the automounter. Examples are available here.

EESSI should now be available in the virtual machine:

user@id:~$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n  Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\n  archdetect says x86_64/intel/haswell\n  Using x86_64/intel/haswell as software subdirectory.\n  Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\n  Found Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/SitePackage.lua\n  Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\n  Using /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all as the site extension directory to be added to MODULEPATH.\n  Initializing Lmod...\n  Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Prepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Environment set up to use EESSI (2023.06), have fun!\n
"},{"location":"getting_access/eessi_wsl/#cleanup-of-the-virtual-machine","title":"Cleanup of the virtual machine","text":"
C:/users/user>wsl --terminate Debian\nC:/users/user>wsl --unregister Debian\n
"},{"location":"getting_access/is_eessi_accessible/","title":"Is EESSI accessible?","text":"

EESSI can be accessed via a native (CernVM-FS) installation, or via a container that includes CernVM-FS.

Before you look into these options, check if EESSI is already accessible on your system.

Run the following command:

ls /cvmfs/software.eessi.io\n

Note

This ls command may take a couple of seconds to finish, since CernVM-FS may need to download or update the metadata for that directory.

If you see output like shown below, you already have access to EESSI on your system.

host_injections  latest  versions\n

To start using EESSI, continue reading about Setting up environment.

If you see an error message as shown below, EESSI is not yet accessible on your system.

ls: /cvmfs/software.eessi.io: No such file or directory\n
No worries, you don't need to be a system administrator to get access to EESSI.

Continue reading about the Native installation of EESSI, or access via the EESSI container.

"},{"location":"getting_access/native_installation/","title":"Native installation","text":""},{"location":"getting_access/native_installation/#installation-for-single-clients","title":"Installation for single clients","text":"

Setting up native access to EESSI (that is, a system-wide deployment that does not require workarounds like using a container) requires the installation and configuration of CernVM-FS.

This requires admin privileges, since you need to install CernVM-FS as an OS package.

The following actions must be taken for a (basic) native installation of EESSI:

The good news is that all of this only requires a handful of commands:

RHEL-based Linux distributions / Debian-based Linux distributions
# Installation commands for RHEL-based distros like CentOS, Rocky Linux, Almalinux, Fedora, ...\n\n# install CernVM-FS\nsudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\nsudo yum install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nsudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n

Note

The default location for the cache directory is /var/lib/cvmfs. Please check that the partition on which this directory is located is big enough to store the cache (and other data). You may override the location by adding CVMFS_CACHE_BASE=<some other directory for the cache> to your default.local, e.g. by running

sudo bash -c \"echo 'CVMFS_CACHE_BASE=<some other directory for the cache>' >> /etc/cvmfs/default.local\"\n
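To verify that the target partition is large enough before (or after) overriding the cache location, a quick check with df suffices; this sketch assumes the default /var/lib/cvmfs location and falls back to its parent directory if the cache directory does not exist yet:

```shell
# Show available space on the partition that holds the CernVM-FS client
# cache (default: /var/lib/cvmfs); fall back to the parent directory if
# the cache directory has not been created yet.
df -h /var/lib/cvmfs 2>/dev/null || df -h /var/lib
```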

"},{"location":"getting_access/native_installation/#installation-for-larger-systems-eg-clusters","title":"Installation for larger systems (e.g. clusters)","text":"

When using CernVM-FS on a larger number of local clients, e.g. an HPC cluster or a set of workstations, it is very strongly recommended to at least set up some Squid proxies close to your clients. These Squid proxies will be used to cache content that was recently accessed by your clients, which reduces the load on the Stratum 1 servers and reduces the latency for your clients. As a rule of thumb, you should use about one proxy per 500 clients, and have a minimum of two. Instructions for setting up a Squid proxy can be found in the CernVM-FS documentation and in the CernVM-FS tutorial.
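The rule of thumb above can be turned into a quick calculation; a small sketch (the client count is just an example):

```shell
# Rule of thumb: about one Squid proxy per 500 clients, minimum of two.
clients=1200                           # example number of clients
proxies=$(( (clients + 499) / 500 ))   # ceiling division
if [ "$proxies" -lt 2 ]; then
    proxies=2
fi
echo "recommended number of proxies: $proxies"
```

With clients=1200, this prints recommended number of proxies: 3.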

Additionally, setting up a private Stratum 1, which will make a full copy of the repository, can be beneficial to improve the latency and bandwidth even further, and to be better protected against network outages. Instructions for setting up your own EESSI Stratum 1 can be found in setting up your own CernVM-FS Stratum 1 mirror server.

"},{"location":"getting_access/native_installation/#configuring-your-client-to-use-a-squid-proxy","title":"Configuring your client to use a Squid proxy","text":"

If you have set up one or more Squid proxies, you will have to add them to your CernVM-FS client configuration. This can be done by removing CVMFS_CLIENT_PROFILE=\"single\" from /etc/cvmfs/default.local, and adding the following line:

CVMFS_HTTP_PROXY=\"http://ip-of-your-1st-proxy:port|http://ip-of-your-2nd-proxy:port\"\n

In this case, both proxies are equally preferable. More advanced use cases can be found in the CernVM-FS documentation.
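As a sketch of a slightly more advanced configuration: two equally preferred proxies in one group, with a direct connection as last resort (a ';' separates preference groups, a '|' separates equally preferred members; the addresses are placeholders to replace with your own):

```shell
# /etc/cvmfs/default.local (fragment)
# Two load-balanced proxies; fall back to a direct connection only if
# both are unreachable. Replace the placeholder IPs/ports with your own.
CVMFS_HTTP_PROXY="http://192.168.0.10:3128|http://192.168.0.11:3128;DIRECT"
```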

"},{"location":"getting_access/native_installation/#configuring-your-client-to-use-a-private-stratum-1-mirror-server","title":"Configuring your client to use a private Stratum 1 mirror server","text":"

If you have set up your own Stratum 1 mirror server that replicates the EESSI CernVM-FS repositories, you can instruct your CernVM-FS client(s) to use it: prepend your newly created Stratum 1 to the existing list of EESSI Stratum 1 servers by creating a local CernVM-FS configuration file for the EESSI domain:

echo 'CVMFS_SERVER_URL=\"http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@;$CVMFS_SERVER_URL\"' | sudo tee -a /etc/cvmfs/domain.d/eessi.io.local\n

It is also strongly recommended to disable the GEO API when using a private Stratum 1, because you want your private Stratum 1 to be picked first anyway. In order to do this, add the following to /etc/cvmfs/domain.d/eessi.io.local:

CVMFS_USE_GEOAPI=no\n

Note

By prepending your new Stratum 1 to the list of existing Stratum 1 servers and disabling the GEO API, your clients should by default use the private Stratum 1. In case of downtime of your private Stratum 1, they will also still be able to make use of the public EESSI Stratum 1 servers.

"},{"location":"getting_access/native_installation/#applying-changes-in-the-cernvm-fs-client-configuration-files","title":"Applying changes in the CernVM-FS client configuration files","text":"

After you have made any changes to the CernVM-FS client configuration, you will have to apply them. If this is the first time you set up the client, you can simply run:

sudo cvmfs_config setup\n

If you already had configured the client before, you can reload the configuration for the EESSI repository (or, similarly, for any other repository) using:

sudo cvmfs_config reload -c software.eessi.io\n
"},{"location":"known_issues/eessi-2023.06/","title":"Known issues","text":""},{"location":"known_issues/eessi-2023.06/#eessi-production-repository-v202306","title":"EESSI Production Repository (v2023.06)","text":""},{"location":"known_issues/eessi-2023.06/#failed-to-modify-ud-qp-to-init-on-mlx5_0-operation-not-permitted","title":"Failed to modify UD QP to INIT on mlx5_0: Operation not permitted","text":"

This is an error that occurs with OpenMPI after updating to OFED 23.10.

There is an upstream issue on this problem opened with EasyBuild. See: https://github.com/easybuilders/easybuild-easyconfigs/issues/20233

Workarounds

You can instruct OpenMPI to not use libfabric and turn off `uct` (see https://openucx.readthedocs.io/en/master/running.html#running-mpi) by passing the following options to `mpirun`:

mpirun -mca pml ucx -mca btl '^uct,ofi' -mca mtl '^ofi'\n
Or equivalently, you can set the following environment variables:
export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
You may also set these additional environment variables via site-specific Lmod hooks:
require(\"strict\")\nlocal hook=require(\"Hook\")\n\n-- Fix Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\nfunction fix_ud_qp_init_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('OMPI_MCA_btl', '^uct,ofi')\n        setenv('OMPI_MCA_pml', 'ucx')\n        setenv('OMPI_MCA_mtl', '^ofi')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    fix_ud_qp_init_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n
For more information about how to write and implement site-specific Lmod hooks, please check the site-specific Lmod hooks documentation (site_specific_config/lmod_hooks.md)."},{"location":"known_issues/eessi-2023.06/#gcc-1220-and-foss-2022b-based-modules-cannot-be-loaded-on-zen4-architecture","title":"GCC-12.2.0 and foss-2022b based modules cannot be loaded on zen4 architecture","text":"

The zen4 architecture was released in late 2022. As a result, the compilers and BLAS libraries that are part of the 2022b toolchain generation did not yet (fully) support this architecture. Concretely, it was found in this PR that unit tests in the OpenBLAS version that is part of the foss-2022b toolchain were failing. It was therefore decided not to support this toolchain generation at all on the zen4 architecture.

"},{"location":"meetings/2022-09-amsterdam/","title":"EESSI Community Meeting (Sept'22, Amsterdam)","text":""},{"location":"meetings/2022-09-amsterdam/#practical-info","title":"Practical info","text":""},{"location":"meetings/2022-09-amsterdam/#agenda","title":"Agenda","text":"

(subject to changes)

We envision a mix of presentations, experience reports, demos, and hands-on sessions and/or hackathons related to the EESSI project.

If you would like to give a talk or host a session, please let us know via the EESSI Slack!

"},{"location":"meetings/2022-09-amsterdam/#wed-14-sept-2022","title":"Wed 14 Sept 2022","text":""},{"location":"meetings/2022-09-amsterdam/#thu-15-sept-2022","title":"Thu 15 Sept 2022","text":""},{"location":"meetings/2022-09-amsterdam/#fri-16-sept-2022","title":"Fri 16 Sept 2022","text":""},{"location":"repositories/dev.eessi.io/","title":"Development repository (dev.eessi.io)","text":""},{"location":"repositories/dev.eessi.io/#what-is-deveessiio","title":"What is dev.eessi.io?","text":"

dev.eessi.io is the development repository of EESSI. With it, developers can deploy pre-release builds of their software to EESSI. This way, development versions of software can easily be tested on systems where the dev.eessi.io CernVM-FS repository is available.

On a system with dev.eessi.io mounted, access is possible with module use /cvmfs/dev.eessi.io/versions/2023.06/modules/all. Then, all that is left is to try out the development software!

"},{"location":"repositories/dev.eessi.io/#question-or-problems","title":"Question or problems","text":"

If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request. If you experience issues with the development repository, feel free to use the #dev.eessi.io channel of the EESSI Slack.

"},{"location":"repositories/dev.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.

"},{"location":"repositories/pilot/","title":"Pilot","text":""},{"location":"repositories/pilot/#pilot-software-stack-202112","title":"Pilot software stack (2021.12)","text":""},{"location":"repositories/pilot/#caveats","title":"Caveats","text":"

Danger

The EESSI pilot repository is no longer actively maintained, and should not be used for production work.

Please use the software.eessi.io repository instead.

The current EESSI pilot software stack (version 2021.12) is the 7th iteration, and there are some known issues and limitations; please take these into account:

Do not use it for production work, and be careful when testing it on production systems!

"},{"location":"repositories/pilot/#reporting-problems","title":"Reporting problems","text":"

If you notice any problems, please report them via https://github.com/EESSI/software-layer/issues.

"},{"location":"repositories/pilot/#accessing-the-eessi-pilot-repository-through-singularity","title":"Accessing the EESSI pilot repository through Singularity","text":"

The easiest way to access the EESSI pilot repository is by using Singularity. If Singularity is already installed, no admin privileges are required, and no other software is needed on the host.

A container image is available in the GitHub Container Registry (see https://github.com/EESSI/filesystem-layer/pkgs/container/client-pilot). It only contains a minimal operating system + the necessary packages to access the EESSI pilot repository through CernVM-FS, and it is suitable for aarch64, ppc64le, and x86_64.

The container image can be used directly by Singularity (no prior download required), as follows:

To verify that things are working, check the contents of the /cvmfs/pilot.eessi-hpc.org/versions/2021.12 directory:

Singularity> ls /cvmfs/pilot.eessi-hpc.org/versions/2021.12\ncompat  init  software\n

"},{"location":"repositories/pilot/#standard-installation","title":"Standard installation","text":"

For those with privileges on their system, there are a number of example installation scripts for different architectures and operating systems available in the EESSI demo repository.

Here we prefer the Singularity approach as we can guarantee that the container image is up to date.

"},{"location":"repositories/pilot/#setting-up-the-eessi-environment","title":"Setting up the EESSI environment","text":"

Once you have the EESSI pilot repository mounted, you can set up the environment by sourcing the provided init script:

source /cvmfs/pilot.eessi-hpc.org/versions/2021.12/init/bash\n

If all goes well, you should see output like this:

Found EESSI pilot repo @ /cvmfs/pilot.eessi-hpc.org/versions/2021.12!\nUsing x86_64/intel/haswell as software subdirectory.\nUsing /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI pilot software stack, have fun!\n[EESSI pilot 2021.12] $ \n

Now you're all set up! Go ahead and explore the software stack using \"module avail\", and go wild with testing the available software installations!

"},{"location":"repositories/pilot/#testing-the-eessi-pilot-software-stack","title":"Testing the EESSI pilot software stack","text":"

Please test the EESSI pilot software stack as you see fit: running simple commands, performing small calculations or running small benchmarks, etc.

Test scripts that have been verified to work correctly using the pilot software stack are available at https://github.com/EESSI/software-layer/tree/main/tests .

"},{"location":"repositories/pilot/#giving-feedback-or-reporting-problems","title":"Giving feedback or reporting problems","text":"

Any feedback is welcome, and questions or problem reports are welcome as well, through one of the EESSI communication channels:

"},{"location":"repositories/pilot/#available-software","title":"Available software","text":"

(last update: Mar 21st 2022)

EESSI currently supports the following HPC applications as well as all their dependencies:

[EESSI pilot 2021.12] $ module --nx avail\n\n--------------------------- /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all ----------------------------\n   ant/1.10.8-Java-11                                              LMDB/0.9.24-GCCcore-9.3.0\n   Arrow/0.17.1-foss-2020a-Python-3.8.2                            lz4/1.9.2-GCCcore-9.3.0\n   Bazel/3.6.0-GCCcore-9.3.0                                       Mako/1.1.2-GCCcore-9.3.0\n   Bison/3.5.3-GCCcore-9.3.0                                       MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n   Boost/1.72.0-gompi-2020a                                        matplotlib/3.2.1-foss-2020a-Python-3.8.2\n   cairo/1.16.0-GCCcore-9.3.0                                      Mesa/20.0.2-GCCcore-9.3.0\n   CGAL/4.14.3-gompi-2020a-Python-3.8.2                            Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2\n   CMake/3.16.4-GCCcore-9.3.0                                      METIS/5.1.0-GCCcore-9.3.0\n   CMake/3.20.1-GCCcore-10.3.0                                     MPFR/4.0.2-GCCcore-9.3.0\n   code-server/3.7.3                                               NASM/2.14.02-GCCcore-9.3.0\n   DB/18.1.32-GCCcore-9.3.0                                        ncdf4/1.17-foss-2020a-R-4.0.0\n   DB/18.1.40-GCCcore-10.3.0                                       netCDF-Fortran/4.5.2-gompi-2020a\n   double-conversion/3.1.5-GCCcore-9.3.0                           netCDF/4.7.4-gompi-2020a\n   Doxygen/1.8.17-GCCcore-9.3.0                                    nettle/3.6-GCCcore-9.3.0\n   EasyBuild/4.5.0                                                 networkx/2.4-foss-2020a-Python-3.8.2\n   EasyBuild/4.5.1                                         (D)     Ninja/1.10.0-GCCcore-9.3.0\n   Eigen/3.3.7-GCCcore-9.3.0                                       NLopt/2.6.1-GCCcore-9.3.0\n   Eigen/3.3.9-GCCcore-10.3.0                                      NSPR/4.25-GCCcore-9.3.0\n   ELPA/2019.11.001-foss-2020a                        
             NSS/3.51-GCCcore-9.3.0\n   expat/2.2.9-GCCcore-9.3.0                                       nsync/1.24.0-GCCcore-9.3.0\n   expat/2.2.9-GCCcore-10.3.0                                      numactl/2.0.13-GCCcore-9.3.0\n   FFmpeg/4.2.2-GCCcore-9.3.0                                      numactl/2.0.14-GCCcore-10.3.0\n   FFTW/3.3.8-gompi-2020a                                          OpenBLAS/0.3.9-GCC-9.3.0\n   FFTW/3.3.9-gompi-2021a                                          OpenBLAS/0.3.15-GCC-10.3.0\n   flatbuffers/1.12.0-GCCcore-9.3.0                                OpenFOAM/v2006-foss-2020a\n   FlexiBLAS/3.0.4-GCC-10.3.0                                      OpenFOAM/8-foss-2020a                              (D)\n   fontconfig/2.13.92-GCCcore-9.3.0                                OpenMPI/4.0.3-GCC-9.3.0\n   foss/2020a                                                      OpenMPI/4.1.1-GCC-10.3.0\n   foss/2021a                                                      OpenPGM/5.2.122-GCCcore-9.3.0\n   freetype/2.10.1-GCCcore-9.3.0                                   OpenSSL/1.1                                        (D)\n   FriBidi/1.0.9-GCCcore-9.3.0                                     OSU-Micro-Benchmarks/5.6.3-gompi-2020a\n   GCC/9.3.0                                                       Pango/1.44.7-GCCcore-9.3.0\n   GCC/10.3.0                                                      ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi\n   GCCcore/9.3.0                                                   PCRE/8.44-GCCcore-9.3.0\n   GCCcore/10.3.0                                                  PCRE2/10.34-GCCcore-9.3.0\n   Ghostscript/9.52-GCCcore-9.3.0                                  Perl/5.30.2-GCCcore-9.3.0\n   giflib/5.2.1-GCCcore-9.3.0                                      Perl/5.32.1-GCCcore-10.3.0\n   git/2.23.0-GCCcore-9.3.0-nodocs                                 pixman/0.38.4-GCCcore-9.3.0\n   git/2.32.0-GCCcore-10.3.0-nodocs                        (D)     
pkg-config/0.29.2-GCCcore-9.3.0\n   GLib/2.64.1-GCCcore-9.3.0                                       pkg-config/0.29.2-GCCcore-10.3.0\n   GLPK/4.65-GCCcore-9.3.0                                         pkg-config/0.29.2                                  (D)\n   GMP/6.2.0-GCCcore-9.3.0                                         pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2\n   GMP/6.2.1-GCCcore-10.3.0                                        PMIx/3.1.5-GCCcore-9.3.0\n   gnuplot/5.2.8-GCCcore-9.3.0                                     PMIx/3.2.3-GCCcore-10.3.0\n   GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2         poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2\n   gompi/2020a                                                     protobuf-python/3.13.0-foss-2020a-Python-3.8.2\n   gompi/2021a                                                     protobuf/3.13.0-GCCcore-9.3.0\n   groff/1.22.4-GCCcore-9.3.0                                      pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2\n   groff/1.22.4-GCCcore-10.3.0                                     pybind11/2.6.2-GCCcore-10.3.0\n   GROMACS/2020.1-foss-2020a-Python-3.8.2                          Python/2.7.18-GCCcore-9.3.0\n   GROMACS/2020.4-foss-2020a-Python-3.8.2                  (D)     Python/3.8.2-GCCcore-9.3.0\n   GSL/2.6-GCC-9.3.0                                               Python/3.9.5-GCCcore-10.3.0-bare\n   gzip/1.10-GCCcore-9.3.0                                         Python/3.9.5-GCCcore-10.3.0\n   h5py/2.10.0-foss-2020a-Python-3.8.2                             PyYAML/5.3-GCCcore-9.3.0\n   HarfBuzz/2.6.4-GCCcore-9.3.0                                    Qt5/5.14.1-GCCcore-9.3.0\n   HDF5/1.10.6-gompi-2020a                                         QuantumESPRESSO/6.6-foss-2020a\n   Horovod/0.21.3-foss-2020a-TensorFlow-2.3.1-Python-3.8.2         R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n   hwloc/2.2.0-GCCcore-9.3.0                                       R/4.0.0-foss-2020a\n   hwloc/2.4.1-GCCcore-10.3.0             
                         re2c/1.3-GCCcore-9.3.0\n   hypothesis/6.13.1-GCCcore-10.3.0                                RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n   ICU/66.1-GCCcore-9.3.0                                          Rust/1.52.1-GCCcore-10.3.0\n   ImageMagick/7.0.10-1-GCCcore-9.3.0                              ScaLAPACK/2.1.0-gompi-2020a\n   IPython/7.15.0-foss-2020a-Python-3.8.2                          ScaLAPACK/2.1.0-gompi-2021a-fb\n   JasPer/2.0.14-GCCcore-9.3.0                                     scikit-build/0.10.0-foss-2020a-Python-3.8.2\n   Java/11.0.2                                             (11)    SciPy-bundle/2020.03-foss-2020a-Python-3.8.2\n   jbigkit/2.1-GCCcore-9.3.0                                       SciPy-bundle/2021.05-foss-2021a\n   JsonCpp/1.9.4-GCCcore-9.3.0                                     SCOTCH/6.0.9-gompi-2020a\n   LAME/3.100-GCCcore-9.3.0                                        snappy/1.1.8-GCCcore-9.3.0\n   libarchive/3.5.1-GCCcore-10.3.0                                 Spark/3.1.1-foss-2020a-Python-3.8.2\n   libcerf/1.13-GCCcore-9.3.0                                      SQLite/3.31.1-GCCcore-9.3.0\n   libdrm/2.4.100-GCCcore-9.3.0                                    SQLite/3.35.4-GCCcore-10.3.0\n   libevent/2.1.11-GCCcore-9.3.0                                   SWIG/4.0.1-GCCcore-9.3.0\n   libevent/2.1.12-GCCcore-10.3.0                                  Szip/2.1.1-GCCcore-9.3.0\n   libfabric/1.11.0-GCCcore-9.3.0                                  Tcl/8.6.10-GCCcore-9.3.0\n   libfabric/1.12.1-GCCcore-10.3.0                                 Tcl/8.6.11-GCCcore-10.3.0\n   libffi/3.3-GCCcore-9.3.0                                        tcsh/6.22.02-GCCcore-9.3.0\n   libffi/3.3-GCCcore-10.3.0                                       TensorFlow/2.3.1-foss-2020a-Python-3.8.2\n   libgd/2.3.0-GCCcore-9.3.0                                       time/1.9-GCCcore-9.3.0\n   libGLU/9.0.1-GCCcore-9.3.0                                   
   Tk/8.6.10-GCCcore-9.3.0\n   libglvnd/1.2.0-GCCcore-9.3.0                                    Tkinter/3.8.2-GCCcore-9.3.0\n   libiconv/1.16-GCCcore-9.3.0                                     UCX/1.8.0-GCCcore-9.3.0\n   libjpeg-turbo/2.0.4-GCCcore-9.3.0                               UCX/1.10.0-GCCcore-10.3.0\n   libpciaccess/0.16-GCCcore-9.3.0                                 UDUNITS/2.2.26-foss-2020a\n   libpciaccess/0.16-GCCcore-10.3.0                                UnZip/6.0-GCCcore-9.3.0\n   libpng/1.6.37-GCCcore-9.3.0                                     UnZip/6.0-GCCcore-10.3.0\n   libsndfile/1.0.28-GCCcore-9.3.0                                 WRF/3.9.1.1-foss-2020a-dmpar\n   libsodium/1.0.18-GCCcore-9.3.0                                  X11/20200222-GCCcore-9.3.0\n   LibTIFF/4.1.0-GCCcore-9.3.0                                     x264/20191217-GCCcore-9.3.0\n   libtirpc/1.2.6-GCCcore-9.3.0                                    x265/3.3-GCCcore-9.3.0\n   libunwind/1.3.1-GCCcore-9.3.0                                   xorg-macros/1.19.2-GCCcore-9.3.0\n   libxc/4.3.4-GCC-9.3.0                                           xorg-macros/1.19.3-GCCcore-10.3.0\n   libxml2/2.9.10-GCCcore-9.3.0                                    Xvfb/1.20.9-GCCcore-9.3.0\n   libxml2/2.9.10-GCCcore-10.3.0                                   Yasm/1.3.0-GCCcore-9.3.0\n   libyaml/0.2.2-GCCcore-9.3.0                                     ZeroMQ/4.3.2-GCCcore-9.3.0\n   LittleCMS/2.9-GCCcore-9.3.0                                     Zip/3.0-GCCcore-9.3.0\n   LLVM/9.0.1-GCCcore-9.3.0                                        zstd/1.4.4-GCCcore-9.3.0\n
"},{"location":"repositories/pilot/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":""},{"location":"repositories/pilot/#x86_64","title":"x86_64","text":""},{"location":"repositories/pilot/#aarch64arm64","title":"aarch64/arm64","text":""},{"location":"repositories/pilot/#ppc64le","title":"ppc64le","text":""},{"location":"repositories/pilot/#easybuild-configuration","title":"EasyBuild configuration","text":"

EasyBuild v4.5.1 was used to install the software in the 2021.12 version of the pilot repository. For some installations, pull requests with changes that will be included in later EasyBuild versions were leveraged; see the build script that was used.

An example configuration of the build environment based on https://github.com/EESSI/software-layer can be seen here:

$ eb --show-config\n#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath         (E) = /tmp/eessi-build/easybuild/build\ncontainerpath     (E) = /tmp/eessi-build/easybuild/containers\ndebug             (E) = True\nfilter-deps       (E) = Autoconf, Automake, Autotools, binutils, bzip2, cURL, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars   (E) = LD_LIBRARY_PATH\nhooks             (E) = /home/eessi-build/software-layer/eb_hooks.py\nignore-osdeps     (E) = True\ninstallpath       (E) = /cvmfs/pilot.eessi-hpc.org/2021.06/software/linux/x86_64/intel/haswell\nmodule-extensions (E) = True\npackagepath       (E) = /tmp/eessi-build/easybuild/packages\nprefix            (E) = /tmp/eessi-build/easybuild\nrepositorypath    (E) = /tmp/eessi-build/easybuild/ebfiles_repo\nrobot-paths       (D) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/software/EasyBuild/4.5.1/easybuild/easyconfigs\nrpath             (E) = True\nsourcepath        (E) = /tmp/eessi-build/easybuild/sources:\nsysroot           (E) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/compat/linux/x86_64\ntrace             (E) = True\nzip-logs          (E) = bzip2\n

"},{"location":"repositories/pilot/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the pilot repository is shown at http://status.eessi.io/pilot/.

"},{"location":"repositories/riscv.eessi.io/","title":"EESSI RISC-V development repository (riscv.eessi.io)","text":"

This repository contains development versions of an EESSI RISC-V software stack. Note that versions may be added, modified, or deleted at any time.

"},{"location":"repositories/riscv.eessi.io/#accessing-the-risc-v-repository","title":"Accessing the RISC-V repository","text":"

See Getting access; by making the EESSI CVMFS domain available, you will automatically have access to riscv.eessi.io as well.

"},{"location":"repositories/riscv.eessi.io/#using-riscveessiio","title":"Using riscv.eessi.io","text":"

This repository currently offers one version (20240402), which contains both a compatibility layer and a software layer. Furthermore, initialization scripts are in place to set up the repository:

$ source /cvmfs/riscv.eessi.io/versions/20240402/init/bash\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $\n

You can even source the initialization script of the software.eessi.io production repository now, and it will automatically set up the RISC-V repository for you:

$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash \nRISC-V architecture detected, but there is no RISC-V support yet in the production repository.\nAutomatically switching to version 20240402 of the RISC-V development repository /cvmfs/riscv.eessi.io.\nFor more details about this repository, see https://www.eessi.io/docs/repositories/riscv.eessi.io/.\n\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all as the site extension directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $ \n

Note that we currently only provide generic builds, hence riscv64/generic is being used for all RISC-V CPUs.

The amount of software is constantly increasing. In addition to the foss/2023b toolchain, applications like dlb, GROMACS, OSU Micro-Benchmarks, and R are already available. Use module avail to get a full and up-to-date listing of available software.

"},{"location":"repositories/riscv.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for this repository is shown at https://status.eessi.io.

"},{"location":"repositories/software.eessi.io/","title":"Production EESSI repository (software.eessi.io)","text":""},{"location":"repositories/software.eessi.io/#question-or-problems","title":"Question or problems","text":"

If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request.

"},{"location":"repositories/software.eessi.io/#accessing-the-eessi-repository","title":"Accessing the EESSI repository","text":"

See Getting access.

"},{"location":"repositories/software.eessi.io/#using-softwareeessiio","title":"Using software.eessi.io","text":"

See Using EESSI.

"},{"location":"repositories/software.eessi.io/#available-software","title":"Available software","text":"

See Available software.

"},{"location":"repositories/software.eessi.io/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":"

See CPU targets.

"},{"location":"repositories/software.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.

"},{"location":"site_specific_config/gpu/","title":"GPU support","text":"

Below, we describe the actions that must be performed to ensure that GPU software included in EESSI can use the GPU in your system.

Please open a support issue if you need help or have questions regarding GPU support.

Make sure the ${EESSI_VERSION} version placeholder is defined!

In this page, we use ${EESSI_VERSION} as a placeholder for the version of the EESSI repository, for example:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}\n

Before inspecting paths, or executing any of the specified commands, you should define $EESSI_VERSION first, for example with:

export EESSI_VERSION=2023.06\n
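Instead of hard-coding a version, you can also derive the latest available one with a version-aware sort. This is a sketch: the stand-in directory below exists only to make the example self-contained, and the commented-out line shows what the equivalent would look like against a mounted EESSI repository.

```shell
# Stand-in for the repository's versions/ directory, for illustration only
mkdir -p /tmp/eessi_demo/versions/2021.12 /tmp/eessi_demo/versions/2023.06

# Pick the newest version directory using a version-aware sort
export EESSI_VERSION=$(ls /tmp/eessi_demo/versions | sort -V | tail -n 1)
echo "$EESSI_VERSION"

# Against a mounted EESSI repository, the equivalent would be:
#   export EESSI_VERSION=$(ls /cvmfs/software.eessi.io/versions | sort -V | tail -n 1)
```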

"},{"location":"site_specific_config/gpu/#nvidia","title":"Support for using NVIDIA GPUs","text":"

EESSI supports running CUDA-enabled software. All CUDA-enabled modules are marked with the (gpu) feature, which is visible in the output produced by module avail.

"},{"location":"site_specific_config/gpu/#nvidia_drivers","title":"NVIDIA GPU drivers","text":"

For CUDA-enabled software to run, it needs to be able to find the NVIDIA GPU drivers of the host system. The challenge here is that the NVIDIA GPU drivers are not always in a standard system location, and that we cannot install the GPU drivers in EESSI (since they are too closely tied to the client OS and GPU hardware).

"},{"location":"site_specific_config/gpu/#cuda_sdk","title":"Compiling CUDA software","text":"

An additional requirement is necessary if you want to be able to compile CUDA-enabled software using a CUDA installation included in EESSI. This requires a full CUDA SDK, but the CUDA SDK End User License Agreement (EULA) does not allow for full redistribution. In EESSI, we are (currently) only allowed to redistribute the files needed to run CUDA software.

Full CUDA SDK only needed to compile CUDA software

Without a full CUDA SDK on the host system, you will still be able to run CUDA-enabled software from the EESSI stack; you just won't be able to compile additional CUDA software.

Below, we describe how to make sure that the EESSI software stack can find your NVIDIA GPU drivers and (optionally) full installations of the CUDA SDK.

"},{"location":"site_specific_config/gpu/#driver_location","title":"Configuring CUDA driver location","text":"

All CUDA-enabled software in EESSI expects the CUDA drivers to be available in a specific subdirectory of the host_injections directory. In addition, installations of the CUDA SDK included in EESSI are stripped down to the files that we are allowed to redistribute; all other files are replaced by symbolic links that point to another specific subdirectory of host_injections. For example:

$ ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\nlrwxrwxrwx 1 cvmfs cvmfs 109 Dec 21 14:49 /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc -> /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\n

If the corresponding full installation of the CUDA SDK is available there, the CUDA installation included in EESSI can be used to build CUDA software.

"},{"location":"site_specific_config/gpu/#nvidia_eessi_native","title":"Using NVIDIA GPUs via a native EESSI installation","text":"

Here, we describe the steps to enable GPU support when you have a native EESSI installation on your system.

Required permissions

To enable GPU support for EESSI on your system, you will typically need to have system administration rights, since you need write permissions to the target directory of the host_injections variant symlink.

"},{"location":"site_specific_config/gpu/#exposing-nvidia-gpu-drivers","title":"Exposing NVIDIA GPU drivers","text":"

To install the symlinks to your GPU drivers in host_injections, run the link_nvidia_host_libraries.sh script that is included in EESSI:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh\n

This script uses ldconfig on your host system to locate your GPU drivers, and creates symbolic links to them in the correct location under the host_injections directory. It also stores the CUDA version supported by the driver that the symlinks were created for.
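The script is driven by the dynamic linker cache: ldconfig -p lists every shared library the host linker knows about, together with its location on disk. The snippet below is only an illustration of that mechanism, using libc (present on any glibc-based Linux system) as a stand-in; the actual script looks for the NVIDIA driver libraries instead.

```shell
# Query the dynamic linker cache for a library and print its full path;
# link_nvidia_host_libraries.sh relies on the same mechanism to find the
# NVIDIA driver libraries (libcuda and friends) on the host.
PATH="$PATH:/sbin:/usr/sbin" ldconfig -p | awk '/libc\.so\.6/ {print $NF; exit}'
```

Prepending /sbin and /usr/sbin to $PATH covers systems where ldconfig is not in the default user path.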

Re-run link_nvidia_host_libraries.sh after NVIDIA GPU driver update

You should re-run this script every time you update the NVIDIA GPU drivers on the host system.

Note that it is safe to re-run the script even if no driver updates were done: the script should detect that the current version of the drivers was already symlinked.

"},{"location":"site_specific_config/gpu/#installing-full-cuda-sdk-optional","title":"Installing full CUDA SDK (optional)","text":"

To install a full CUDA SDK under host_injections, use the install_cuda_host_injections.sh script that is included in EESSI:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh\n

For example, to install CUDA 12.1.1 in the directory that the host_injections variant symlink points to, using /tmp/$USER/EESSI as directory to store temporary files:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --cuda-version 12.1.1 --temp-dir /tmp/$USER/EESSI --accept-cuda-eula\n
You should choose the CUDA version you wish to install according to what CUDA versions are included in EESSI; see the output of module avail CUDA/ after setting up your environment for using EESSI.

You can run /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --help to check all of the options.

Tip

This script uses EasyBuild to install the CUDA SDK. For this to work, two requirements need to be satisfied:

You can rely on the EasyBuild installation that is included in EESSI for this.

Alternatively, you may load an EasyBuild module manually before running the install_cuda_host_injections.sh script to make an eb command available.

"},{"location":"site_specific_config/gpu/#nvidia_eessi_container","title":"Using NVIDIA GPUs via EESSI in a container","text":"

We focus here on the Apptainer/Singularity use case, and have only tested the --nv option to enable access to GPUs from within the container.

If you are using the EESSI container to access the EESSI software, the procedure for enabling GPU support is slightly different and will be documented here eventually.

"},{"location":"site_specific_config/gpu/#exposing-nvidia-gpu-drivers_1","title":"Exposing NVIDIA GPU drivers","text":"

When running a container with apptainer or singularity, it is not necessary to run the link_nvidia_host_libraries.sh script, since both these tools use $LD_LIBRARY_PATH internally to make the host GPU drivers available in the container.

The only scenario where this would be required is if $LD_LIBRARY_PATH is modified or undefined.

"},{"location":"site_specific_config/gpu/#gpu_cuda_testing","title":"Testing the GPU support","text":"

The quickest way to test if software installations included in EESSI can access and use your GPU is to run the deviceQuery executable that is part of the CUDA-Samples module:

module load CUDA-Samples\ndeviceQuery\n
If both commands succeed, you should see information about your GPU printed to your terminal.

"},{"location":"site_specific_config/host_injections/","title":"How to configure EESSI","text":""},{"location":"site_specific_config/host_injections/#why-configuration-is-necessary","title":"Why configuration is necessary","text":"

Just installing EESSI is enough to get started with the EESSI software stack on a CPU-based system. However, additional configuration is necessary in many other cases, such as: enabling GPU support on GPU-based systems; site-specific configuration or tuning of the MPI libraries provided by EESSI; and overriding EESSI's MPI library with an ABI-compatible host MPI.

"},{"location":"site_specific_config/host_injections/#the-host_injections-variant-symlink","title":"The host_injections variant symlink","text":"

To allow such site-specific configuration, the EESSI repository includes a special directory where system administrators can install files that can be picked up by the software installations included in EESSI. This special directory is located in /cvmfs/software.eessi.io/host_injections, and it is a CernVM-FS Variant Symlink: a symbolic link for which the target can be controlled by the CernVM-FS client configuration (for more info, see 'Variant Symlinks' in the official CernVM-FS documentation).
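The behaviour of such a symlink can be illustrated with a plain local symlink. The paths below are made up purely for demonstration; on a real client the link is /cvmfs/software.eessi.io/host_injections and its target is controlled by the CernVM-FS client configuration rather than by ln:

```shell
# Local stand-in for the host_injections variant symlink and its target
mkdir -p /tmp/opt_eessi_demo
ln -sfn /tmp/opt_eessi_demo /tmp/host_injections_demo

# readlink reveals where the symlink currently points; on a real client you
# would inspect /cvmfs/software.eessi.io/host_injections the same way
readlink /tmp/host_injections_demo
```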

Default target for host_injections variant symlink

Unless otherwise configured in the CernVM-FS client configuration for the EESSI repository, the host_injections symlink points to /opt/eessi on the client system:

$ ls -l /cvmfs/software.eessi.io/host_injections\nlrwxrwxrwx 1 cvmfs cvmfs 10 Oct  3 13:51 /cvmfs/software.eessi.io/host_injections -> /opt/eessi\n

The target for this symlink can be controlled by setting the EESSI_HOST_INJECTIONS variable in your local CVMFS configuration for EESSI. E.g.

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/' > /etc/cvmfs/domain.d/eessi.io.local\"\n

Don't forget to reload the CernVM-FS configuration

After making a change to a CernVM-FS configuration file, you also need to reload the configuration:

sudo cvmfs_config reload\n

On a heterogeneous system, you may want to use different targets for the variant symlink for different node types. For example, you might have two types of GPU nodes (gpu1 and gpu2) for which the GPU drivers are not in the same location, or not of the same version. Since those are both things we configure under host_injections, you'll need separate host_injections directories for each node type. That can easily be achieved by putting e.g.

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/gpu1/' > /etc/cvmfs/domain.d/eessi.io.local\"\n

in the CVMFS config on the gpu1 nodes, and

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/gpu2/' > /etc/cvmfs/domain.d/eessi.io.local\"\n
in the CVMFS config on the gpu2 nodes.

"},{"location":"site_specific_config/lmod_hooks/","title":"Configuring site-specific Lmod hooks","text":"

You may want to customize what happens when certain modules are loaded; for example, you may want to set additional environment variables. This is possible with Lmod hooks. A typical example would be when you want to tune the OpenMPI module for your system by setting additional environment variables when an OpenMPI module is loaded.

"},{"location":"site_specific_config/lmod_hooks/#location-of-the-hooks","title":"Location of the hooks","text":"

The EESSI software stack provides its own set of hooks in $LMOD_PACKAGE_PATH/SitePackage.lua. This SitePackage.lua also searches for site-specific hooks in two additional locations:

The first allows for hooks that need to be executed for that system, irrespective of the CPU architecture. The second allows for hooks specific to a certain architecture.

"},{"location":"site_specific_config/lmod_hooks/#architecture-independent-hooks","title":"Architecture-independent hooks","text":"

Hooks are written in Lua and can use any of the standard Lmod functionality as described in the Lmod documentation. While there are many types of hooks, you most likely want to specify a load or unload hook. Note that the EESSI hooks provide a nice example of what you can do with hooks. Here, as an example, we will define a load hook that sets the environment variable MY_ENV_VAR to 1 whenever an OpenMPI module is loaded.

First, you typically want to load the necessary Lua packages:

-- $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/.lmod/SitePackage.lua\n\n-- The Strict package checks for the use of undeclared variables:\nrequire(\"strict\")\n\n-- Load the Lmod Hook package\nlocal hook=require(\"Hook\")\n

Next, we define a function that we want to use as a hook. Unfortunately, registering multiple hooks of the same type (e.g. multiple load hooks) is only supported in Lmod 8.7.35+. EESSI version 2023.06 uses Lmod 8.7.30. Thus, we define our function without the local keyword, so that we can still add to it later in an architecture-specific hook (if we wanted to):

-- Define a function for the hook\n-- Note that we define this without 'local' keyword\n-- That way we can still add to this function in an architecture-specific hook\nfunction set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n

For the same reason (multiple hook functions of the same type cannot be registered), we need to combine our site-specific (architecture-independent) hook function with the function that specifies the EESSI load hook. Note that all EESSI hooks are called eessi_<hook_type>_hook by convention.

-- Registering multiple hook functions, e.g. multiple load hooks is only supported in Lmod 8.7.35+\n-- EESSI version 2023.06 uses lmod 8.7.30. Thus, we first have to combine all functions into a single one,\n-- before registering it as a hook\nlocal function combined_load_hook(t)\n    -- Call the EESSI load hook (if it exists)\n    -- Note that if you wanted to overwrite the EESSI hooks (not recommended!), you would omit this\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    -- Call the site-specific load hook\n    set_my_env_var_openmpi(t)\nend\n

Then, we can finally register this function as an Lmod hook:

hook.register(\"load\", combined_load_hook)\n

Thus, our complete $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/.lmod/SitePackage.lua now looks like this (omitting the comments):

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nfunction set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    set_my_env_var_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n

Note that for future EESSI versions, if they use Lmod 8.7.35+, this would be simplified to:

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n\nhook.register(\"load\", set_my_env_var_openmpi, \"append\")\n
"},{"location":"site_specific_config/lmod_hooks/#architecture-dependent-hooks","title":"Architecture-dependent hooks","text":"

Now, assume that in addition we want to set an environment variable MY_SECOND_ENV_VAR to 5, but only for nodes that have the zen3 architecture. First, again, you typically want to load the necessary Lua packages:

-- $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/software/linux/x86_64/amd/zen3/.lmod/SitePackage.lua\n\n-- The Strict package checks for the use of undeclared variables:\nrequire(\"strict\")\n\n-- Load the Lmod Hook package\nlocal hook=require(\"Hook\")\n

Next, we define the function for the hook itself

-- Define a function for the hook\n-- This time, we can define it as a local function, as there are no hooks more specific than this \nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n

Then, we combine the functions into one

local function combined_load_hook(t)\n    -- Call the EESSI load hook first\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    -- Then call the architecture-independent load hook (if it is defined)\n    if set_my_env_var_openmpi ~= nil then\n        set_my_env_var_openmpi(t)\n    end\n    -- And finally the architecture-dependent load hook we just defined\n    set_my_second_env_var_openmpi(t)\nend\n

before finally registering it as an Lmod hook

hook.register(\"load\", combined_load_hook)\n

Thus, our full $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/software/linux/x86_64/amd/zen3/.lmod/SitePackage.lua now looks like this (omitting the comments):

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    if set_my_env_var_openmpi ~= nil then\n        set_my_env_var_openmpi(t)\n    end\n    set_my_second_env_var_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n

Again, note that for future EESSI versions, if they use Lmod 8.7.35+, this would simplify to

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n\nhook.register(\"load\", set_my_second_env_var_openmpi, \"append\")\n
"},{"location":"software_layer/build_nodes/","title":"Build nodes","text":"

Any system can be used as a build node to create additional software installations that should be added to the EESSI CernVM-FS repository.

"},{"location":"software_layer/build_nodes/#requirements","title":"Requirements","text":"

OS and software:

Admin privileges are not required, as long as Singularity is installed.

Resources:

Instructions to install Singularity and screen (click to show commands):

CentOS 8 (x86_64 or aarch64 or ppc64le)
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\nsudo dnf update -y\nsudo dnf install -y screen singularity\n
"},{"location":"software_layer/build_nodes/#setting-up-the-container","title":"Setting up the container","text":"

Warning

It is highly recommended to start a screen or tmux session first!

A container image is provided that includes everything that is required to set up a writable overlay on top of the EESSI CernVM-FS repository.

First, pick a location on a local filesystem for the temporary directory:

Requirements:

NB. If you are going to install on a separate drive (due to lack of space on /), you need to set some variables to point to that location, and also bind mount it in the singularity command. Let's say that your drive is mounted at /srt. Then you change the relevant commands below to this:

export EESSI_TMPDIR=/srt/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\nmkdir /srt/tmp\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs,/srt/tmp:/tmp\"\nsingularity shell -B /srt --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n

We will assume that /tmp/$USER/EESSI meets these requirements:

export EESSI_TMPDIR=/tmp/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\n

Create some subdirectories in this temporary directory:

mkdir -p $EESSI_TMPDIR/{home,overlay-upper,overlay-work}\nmkdir -p $EESSI_TMPDIR/{var-lib-cvmfs,var-run-cvmfs}\n

Configure Singularity cache directory, bind mounts, and (fake) home directory:

export SINGULARITY_CACHEDIR=$EESSI_TMPDIR/singularity_cache\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"$EESSI_TMPDIR/home:/home/$USER\"\n

Define values to pass to the --fusemount option of the singularity command:

export EESSI_READONLY=\"container:cvmfs2 software.eessi.io /cvmfs_ro/software.eessi.io\"\nexport EESSI_WRITABLE_OVERLAY=\"container:fuse-overlayfs -o lowerdir=/cvmfs_ro/software.eessi.io -o upperdir=$EESSI_TMPDIR/overlay-upper -o workdir=$EESSI_TMPDIR/overlay-work /cvmfs/software.eessi.io\"\n

Start the container (which includes Debian 11, CernVM-FS and fuse-overlayfs):

singularity shell --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n

Once the container image has been downloaded and converted to a Singularity image (SIF format), you should get a prompt like this:

...\nCernVM-FS: loading Fuse module... done\n\nSingularity>\n

and the EESSI CernVM-FS repository should be mounted:

Singularity> ls /cvmfs/software.eessi.io\nhost_injections  README.eessi  versions\n
"},{"location":"software_layer/build_nodes/#setting-up-the-environment","title":"Setting up the environment","text":"

Set up the environment by starting a Gentoo Prefix session using the startprefix command.

Make sure you use the correct version of the EESSI repository!

export EESSI_VERSION='2023.06' \n/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/compat/linux/$(uname -m)/startprefix\n
"},{"location":"software_layer/build_nodes/#installing-software","title":"Installing software","text":"

Clone the software-layer repository:

git clone https://github.com/EESSI/software-layer.git\n

Run the software installation script in software-layer:

cd software-layer\n./EESSI-install-software.sh\n

This script will figure out the CPU microarchitecture of the host automatically (like x86_64/intel/haswell).

To build generic software installations (like x86_64/generic), use the --generic option:

./EESSI-install-software.sh --generic\n

Once all missing software has been installed, you should see a message like this:

No missing modules!\n
"},{"location":"software_layer/build_nodes/#creating-tarball-to-ingest","title":"Creating tarball to ingest","text":"

Before tearing down the build node, you should create a tarball to ingest into the EESSI CernVM-FS repository.

To create a tarball of all installations, assuming your build host is x86_64/intel/haswell:

export EESSI_VERSION='2023.06'\ncd /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/software/linux\neessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell.tar.gz\"\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell\n

To create a tarball for specific installations, make sure you pick up both the software installation directories and the corresponding module files:

eessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell-OpenFOAM.tar.gz\"\n\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell/software/OpenFOAM x86_64/intel/haswell/modules/all/OpenFOAM\n
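Before uploading, it is worth sanity-checking that the tarball really contains both the software installation directories and the matching module files. The snippet below is a self-contained illustration (it builds a tiny stand-in tarball so the commands run anywhere); on a real build node, point eessi_tar_gz at the tarball created above instead:

```shell
# Build a minimal stand-in tarball mimicking the layout created above
workdir=$(mktemp -d)
mkdir -p "$workdir/x86_64/intel/haswell/software/OpenFOAM" \
         "$workdir/x86_64/intel/haswell/modules/all/OpenFOAM"
eessi_tar_gz="$workdir/eessi-demo.tar.gz"
tar -C "$workdir" -czf "$eessi_tar_gz" x86_64

# Verify that both the software installation and the module files are included
tar tzf "$eessi_tar_gz" | grep 'software/OpenFOAM'
tar tzf "$eessi_tar_gz" | grep 'modules/all/OpenFOAM'
```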

This tarball should be uploaded to the Stratum 0 server for ingestion. If needed, you can ask for help in the EESSI #software-layer Slack channel.

"},{"location":"software_layer/cpu_targets/","title":"CPU targets","text":"

In the 2023.06 version of the EESSI repository, the following CPU microarchitectures are supported.

The names of these CPU targets correspond to the names used by archspec.

"},{"location":"talks/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"

AWS HPC Tech Short (~8 min.) - 15 June 2023

"},{"location":"talks/2023/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"

AWS HPC Tech Short (~8 min.) - 15 June 2023

"},{"location":"talks/2023/20231027_packagingcon23_eessi/","title":"Streaming optimized scientific software installations on any Linux distro with EESSI","text":""},{"location":"talks/2023/20231204_cvmfs_hpc/","title":"Best Practices for CernVM-FS in HPC","text":""},{"location":"talks/2023/20231205_castiel2_eessi_intro/","title":"Streaming Optimised Scientific Software: an Introduction to EESSI","text":""},{"location":"test-suite/","title":"EESSI test suite","text":"

The EESSI test suite is a collection of tests that are run using ReFrame. It is used to check whether the software installations included in the EESSI software layer are working and performing as expected.

To get started, you should look into the installation and configuration guidelines first.

To write the ReFrame configuration file for your system, check ReFrame configuration file.

For an overview of the available software tests, see available-tests.md.

For more information on using the EESSI test suite, see here.

See also release notes for the EESSI test suite.

"},{"location":"test-suite/ReFrame-configuration-file/","title":"ReFrame configuration file","text":"

In order for ReFrame to run tests on your system, it needs to know some properties about your system. For example, it needs to know what kind of job scheduler you have, which partitions the system has, how to submit to those partitions, etc. All of this has to be described in a ReFrame configuration file (see also the section on $RFM_CONFIG_FILES).

This page is organized as follows:

"},{"location":"test-suite/ReFrame-configuration-file/#available-reframe-configuration-files","title":"Available ReFrame configuration files","text":"

Several ReFrame configuration files for HPC systems and public cloud instances are available in the config directory for inspiration. Below is a simple ReFrame configuration file with the minimal changes required to get you started on using the test suite for a CPU partition. Please check that stagedir is set to a path on a (shared) scratch filesystem for storing (temporary) files related to the tests, and that access is set to the list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu' for submitting to a Slurm partition called cpu).

To write a ReFrame configuration file for your system, check the section How to write a ReFrame configuration file.

\"\"\"\nsimple ReFrame configuration file\n\"\"\"\nimport os\n\nfrom eessi.testsuite.common_config import common_logging_config, common_eessi_init, format_perfvars, perflog_format\nfrom eessi.testsuite.constants import *  \n\nsite_configuration = {\n    'systems': [\n        {\n            'name': 'cpu_partition',\n            'descr': 'CPU partition',\n            'modules_system': 'lmod',\n            'hostnames': ['*'],\n            # Note that the stagedir should be a shared directory available on all nodes running ReFrame tests\n            'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n            'partitions': [\n                {\n                    'name': 'cpu_partition',\n                    'descr': 'CPU partition',\n                    'scheduler': 'slurm',\n                    'launcher': 'mpirun',\n                    'access':  ['-p cpu', '--export=None'],\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'resources': [\n                        {\n                            'name': 'memory',\n                            'options': ['--mem={size}'],\n                        }\n                    ],\n                    'features': [\n                        FEATURES[CPU]\n                    ] + list(SCALES.keys()),\n                }\n            ]\n        },\n    ],\n    'environments': [\n        {\n            'name': 'default',\n            'cc': 'cc',\n            'cxx': '',\n            'ftn': '',\n        },\n    ],\n    'logging': common_logging_config(),\n    'general': [\n        {\n            # Enable automatic detection of CPU architecture for each partition\n            # See https://reframe-hpc.readthedocs.io/en/stable/configure.html#auto-detecting-processor-information\n            'remote_detect': True,\n        }\n    ],\n}\n\n# optional logging to 
syslog\nsite_configuration['logging'][0]['handlers_perflog'].append({\n    'type': 'syslog',\n    'address': '/dev/log',\n    'level': 'info',\n    'format': f'reframe: {perflog_format}',\n    'format_perfvars': format_perfvars,\n    'append': True,\n})\n
"},{"location":"test-suite/ReFrame-configuration-file/#verifying-your-reframe-configuration","title":"Verifying your ReFrame configuration","text":"

To verify the ReFrame configuration, you can query the configuration using --show-config.

To see the full configuration, use:

reframe --show-config\n

To only show the configuration of a particular system partition, you can use the --system option. To query a specific setting, you can pass an argument to --show-config.

For example, to show the configuration of the gpu partition of the example system:

reframe --system example:gpu --show-config systems/0/partitions\n

You can drill it down further to only show the value of a particular configuration setting.

For example, to only show the launcher value for the gpu partition of the example system:

reframe --system example:gpu --show-config systems/0/partitions/@gpu/launcher\n
"},{"location":"test-suite/ReFrame-configuration-file/#write-reframe-config","title":"How to write a ReFrame configuration file","text":"

The official ReFrame documentation provides the full description of configuring ReFrame for your site. However, some configuration settings are specifically required by the EESSI test suite. Also, ReFrame offers a large number of configuration settings, which can make the official documentation a bit overwhelming.

Here, we will describe how to create a configuration file that works with the EESSI test suite, starting from an example configuration file settings_example.py, which defines the most common configuration settings.

"},{"location":"test-suite/ReFrame-configuration-file/#python-imports","title":"Python imports","text":"

The EESSI test suite standardizes a few string-based values as constants, as well as the logging format used by ReFrame. Every ReFrame configuration file used for running the EESSI test suite should therefore start with the following import statements:

from eessi.testsuite.common_config import common_logging_config, common_eessi_init\nfrom eessi.testsuite.constants import *\n
"},{"location":"test-suite/ReFrame-configuration-file/#high-level-system-info-systems","title":"High-level system info (systems)","text":"

First, we describe the system at its highest level through the systems keyword.

You can define multiple systems in a single configuration file (systems is a Python list value). We recommend defining just a single system in each configuration file, as it makes the configuration file a bit easier to digest (for humans).

An example of the systems section of the configuration file would be:

site_configuration = {\n    'systems': [\n    # We could list multiple systems. Here, we just define one\n        {\n            'name': 'example',\n            'descr': 'Example cluster',\n            'modules_system': 'lmod',\n            'hostnames': ['*'],\n            'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n            'partitions': [...],\n        }\n    ]\n}\n

The most common configuration items defined at this level are:

"},{"location":"test-suite/ReFrame-configuration-file/#partitions","title":"System partitions (systems.partitions)","text":"

The next step is to add the system partitions to the configuration files, which is also specified as a Python list since a system can have multiple partitions.

The partitions section of the configuration for a system with two Slurm partitions (one CPU partition, and one GPU partition) could for example look something like this:

site_configuration = {\n    'systems': [\n        {\n            ...\n            'partitions': [\n                {\n                    'name': 'cpu_partition',\n                    'descr': 'CPU partition'\n                    'scheduler': 'slurm',\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'launcher': 'mpirun',\n                    'access':  ['-p cpu'],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'features': [\n                        FEATURES[CPU]\n                    ] + list(SCALES.keys()),\n                },\n                {\n                    'name': 'gpu_partition',\n                    'descr': 'GPU partition'\n                    'scheduler': 'slurm',\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'launcher': 'mpirun',\n                    'access':  ['-p gpu'],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'resources': [\n                        {\n                            'name': '_rfm_gpu',\n                            'options': ['--gpus-per-node={num_gpus_per_node}'],\n                        }\n                    ],\n                    'devices': [\n                        {\n                            'type': DEVICE_TYPES[GPU],\n                            'num_devices': 4,\n                        }\n                    ],\n                    'features': [\n                        FEATURES[CPU],\n                        FEATURES[GPU],\n                    ],\n                    'extras': {\n                        GPU_VENDOR: GPU_VENDORS[NVIDIA],\n                    },\n                },\n            ]\n        }\n    ]\n}\n

The most common configuration items defined at this level are:

Note that as more tests are added to the EESSI test suite, the use of features, devices and extras by the EESSI test suite may be extended, which may require an update of your configuration file to define newly recognized fields.

Note

Keep in mind that ReFrame partitions are virtual entities: they may or may not correspond to a partition as it is configured in your batch system. One might for example have a single partition in the batch system, but configure it as two separate partitions in the ReFrame configuration file based on additional constraints that are passed to the scheduler, see for example the AWS CitC example configuration.

The EESSI test suite (and more generally, ReFrame) assumes the hardware within a partition defined in the ReFrame configuration file is homogeneous.
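As a sketch of that idea, one batch-system partition can back two ReFrame partitions that differ only in the extra constraint passed through access (the partition and feature names below are made up for illustration):

```python
# Hypothetical: a single Slurm partition 'compute' containing two CPU generations,
# exposed to ReFrame as two homogeneous (virtual) partitions via --constraint
partitions = [
    {
        'name': 'compute_icelake',
        'descr': 'compute nodes with Icelake CPUs',
        'scheduler': 'slurm',
        'launcher': 'mpirun',
        'access': ['-p compute', '--constraint=icelake'],
        'environs': ['default'],
    },
    {
        'name': 'compute_zen3',
        'descr': 'compute nodes with Zen3 CPUs',
        'scheduler': 'slurm',
        'launcher': 'mpirun',
        'access': ['-p compute', '--constraint=zen3'],
        'environs': ['default'],
    },
]

# Both virtual partitions submit to the same batch partition ...
assert all('-p compute' in p['access'] for p in partitions)
# ... but each one stays homogeneous thanks to its own constraint
assert partitions[0]['access'] != partitions[1]['access']
```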

"},{"location":"test-suite/ReFrame-configuration-file/#environments","title":"Environments","text":"

ReFrame needs a programming environment to be defined in its configuration file for tests that need to be compiled before they are run. While we don't have such tests in the EESSI test suite, ReFrame requires some programming environment to be defined:

site_configuration = {\n    ...\n    'environments': [\n        {\n            'name': 'default',  # Note: needs to match whatever we set for 'environs' in the partition\n            'cc': 'cc',\n            'cxx': '',\n            'ftn': '',\n        }\n    ]\n}\n

Note

The name here needs to match whatever we specified for the environs property of the partitions.

"},{"location":"test-suite/ReFrame-configuration-file/#logging","title":"Logging","text":"

ReFrame allows a large degree of control over what gets logged, and where. For convenience, we have created a common logging configuration in eessi.testsuite.common_config that provides a reasonable default. It can be used by importing common_logging_config and calling it as a function to define the logging setting:

from eessi.testsuite.common_config import common_logging_config\n\nsite_configuration = {\n    ...\n    'logging':  common_logging_config(),\n}\n
When combined with setting the $RFM_PREFIX environment variable (which we recommend), the output, performance logs, and regular ReFrame logs will all end up in the directory specified by $RFM_PREFIX.

Alternatively, a prefix can be passed as an argument like common_logging_config(prefix), which will control where the regular ReFrame log ends up. Note that the performance logs do not respect this prefix: they will still end up in the standard ReFrame prefix (by default the current directory, unless otherwise set with $RFM_PREFIX or --prefix).
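For instance, to send the regular ReFrame log to a fixed location regardless of where reframe is invoked (the path below is purely illustrative):

```python
from eessi.testsuite.common_config import common_logging_config

site_configuration = {
    # ... systems, environments, ...
    # The regular ReFrame log goes under this prefix; the performance logs still
    # follow $RFM_PREFIX (or --prefix), as noted above
    'logging': common_logging_config('/scratch/example_user/reframe_logs'),
}
```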

"},{"location":"test-suite/ReFrame-configuration-file/#cpu-auto-detection","title":"Auto-detection of processor information","text":"

You can let ReFrame auto-detect the processor information for your system.

"},{"location":"test-suite/ReFrame-configuration-file/#creation-of-topology-file-by-reframe","title":"Creation of topology file by ReFrame","text":"

ReFrame will automatically use auto-detection when two conditions are met:

  1. The partitions section of your configuration file does not specify processor information for a particular partition (as per our recommendation in the previous section);
  2. The remote_detect option is enabled in the general part of the configuration, as follows:
    site_configuration = {\n    'systems': ...\n    'logging': ...\n    'general': [\n        {\n            'remote_detect': True,\n        }\n    ]\n}\n

To trigger the auto-detection of processor information, it is sufficient to let ReFrame list the available tests:

reframe --list\n

ReFrame will store the processor information for your system in ~/.reframe/topology/<system>-<partition>/processor.json.

"},{"location":"test-suite/ReFrame-configuration-file/#create-topology-file","title":"Create topology file","text":"

You can also use the reframe option --detect-host-topology to create the topology file yourself.

Run the following command on the cluster of which you need the topology.

reframe --detect-host-topology[=FILE]\n

The output is written to FILE if one is specified, and printed to standard output otherwise. It will look something like this:

{\n  \"arch\": \"skylake_avx512\",\n  \"topology\": {\n    \"numa_nodes\": [\n      \"0x111111111\",\n      \"0x222222222\",\n      \"0x444444444\",\n      \"0x888888888\"\n    ],\n    \"sockets\": [\n      \"0x555555555\",\n      \"0xaaaaaaaaa\"\n    ],\n    \"cores\": [\n      \"0x000000001\",\n      \"0x000000002\",\n      \"0x000000004\",\n      \"0x000000008\",\n      \"0x000000010\",\n      \"0x000000020\",\n      \"0x000000040\",\n      \"0x000000080\",\n      \"0x000000100\",\n      \"0x000000200\",\n      \"0x000000400\",\n      \"0x000000800\",\n      \"0x000001000\",\n      \"0x000002000\",\n      \"0x000004000\",\n      \"0x000008000\",\n      \"0x000010000\",\n      \"0x000020000\",\n      \"0x000040000\",\n      \"0x000080000\",\n      \"0x000100000\",\n      \"0x000200000\",\n      \"0x000400000\",\n      \"0x000800000\",\n      \"0x001000000\",\n      \"0x002000000\",\n      \"0x004000000\",\n      \"0x008000000\",\n      \"0x010000000\",\n      \"0x020000000\",\n      \"0x040000000\",\n      \"0x080000000\",\n      \"0x100000000\",\n      \"0x200000000\",\n      \"0x400000000\",\n      \"0x800000000\"\n    ],\n    \"caches\": [\n      {\n        \"type\": \"L2\",\n        \"size\": 1048576,\n        \"linesize\": 64,\n        \"associativity\": 16,\n        \"num_cpus\": 1,\n        \"cpusets\": [\n          \"0x000000001\",\n          \"0x000000002\",\n          \"0x000000004\",\n          \"0x000000008\",\n          \"0x000000010\",\n          \"0x000000020\",\n          \"0x000000040\",\n          \"0x000000080\",\n          \"0x000000100\",\n          \"0x000000200\",\n          \"0x000000400\",\n          \"0x000000800\",\n          \"0x000001000\",\n          \"0x000002000\",\n          \"0x000004000\",\n          \"0x000008000\",\n          \"0x000010000\",\n          \"0x000020000\",\n          \"0x000040000\",\n          \"0x000080000\",\n          \"0x000100000\",\n          \"0x000200000\",\n          \"0x000400000\",\n          
\"0x000800000\",\n          \"0x001000000\",\n          \"0x002000000\",\n          \"0x004000000\",\n          \"0x008000000\",\n          \"0x010000000\",\n          \"0x020000000\",\n          \"0x040000000\",\n          \"0x080000000\",\n          \"0x100000000\",\n          \"0x200000000\",\n          \"0x400000000\",\n          \"0x800000000\"\n        ]\n      },\n      {\n        \"type\": \"L1\",\n        \"size\": 32768,\n        \"linesize\": 64,\n        \"associativity\": 8,\n        \"num_cpus\": 1,\n        \"cpusets\": [\n          \"0x000000001\",\n          \"0x000000002\",\n          \"0x000000004\",\n          \"0x000000008\",\n          \"0x000000010\",\n          \"0x000000020\",\n          \"0x000000040\",\n          \"0x000000080\",\n          \"0x000000100\",\n          \"0x000000200\",\n          \"0x000000400\",\n          \"0x000000800\",\n          \"0x000001000\",\n          \"0x000002000\",\n          \"0x000004000\",\n          \"0x000008000\",\n          \"0x000010000\",\n          \"0x000020000\",\n          \"0x000040000\",\n          \"0x000080000\",\n          \"0x000100000\",\n          \"0x000200000\",\n          \"0x000400000\",\n          \"0x000800000\",\n          \"0x001000000\",\n          \"0x002000000\",\n          \"0x004000000\",\n          \"0x008000000\",\n          \"0x010000000\",\n          \"0x020000000\",\n          \"0x040000000\",\n          \"0x080000000\",\n          \"0x100000000\",\n          \"0x200000000\",\n          \"0x400000000\",\n          \"0x800000000\"\n        ]\n      },\n      {\n        \"type\": \"L3\",\n        \"size\": 25952256,\n        \"linesize\": 64,\n        \"associativity\": 11,\n        \"num_cpus\": 18,\n        \"cpusets\": [\n          \"0x555555555\",\n          \"0xaaaaaaaaa\"\n        ]\n      }\n    ]\n  },\n  \"num_cpus\": 36,\n  \"num_cpus_per_core\": 1,\n  \"num_cpus_per_socket\": 18,\n  \"num_sockets\": 2\n}\n

Note

ReFrame 4.5.1 generates more parameters than it can parse. To resolve this issue, you can remove the following parameters from the generated topology file: vendor, model, and/or platform.

For ReFrame to find the topology file, it needs to be at the following path: ~/.reframe/topology/&lt;system_name&gt;-&lt;partition_name&gt;/processor.json
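Pruning those fields can be scripted; a minimal sketch (the helper name and demo values are hypothetical, and the commented-out path shows where the generated file would normally live):

```python
import json
from pathlib import Path

def strip_unparsable(topology: dict) -> dict:
    """Drop the fields that ReFrame 4.5.1 writes but cannot parse back in."""
    return {k: v for k, v in topology.items() if k not in ('vendor', 'model', 'platform')}

# On a real system, rewrite the generated file in place, e.g.:
#   path = Path.home() / '.reframe/topology/example-cpu_partition/processor.json'
#   path.write_text(json.dumps(strip_unparsable(json.loads(path.read_text())), indent=2))

# Demo on an inline fragment of a topology file
demo = {'arch': 'skylake_avx512', 'vendor': 'GenuineIntel', 'model': 85, 'num_cpus': 36}
print(strip_unparsable(demo))  # {'arch': 'skylake_avx512', 'num_cpus': 36}
```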

"},{"location":"test-suite/available-tests/","title":"Available tests","text":"

The EESSI test suite currently includes tests for:

For a complete overview of all available tests in the EESSI test suite, see the eessi/testsuite/tests subdirectory in the EESSI/test-suite GitHub repository.

"},{"location":"test-suite/available-tests/#gromacs","title":"GROMACS","text":"

Several tests for GROMACS, a software package to perform molecular dynamics simulations, are included, which use the systems included in the HECBioSim benchmark suite:

It is implemented in tests/apps/gromacs.py, on top of the GROMACS test that is included in the ReFrame test library hpctestlib.

To run this GROMACS test with all HECBioSim systems, use:

reframe --run --name GROMACS\n

To run this GROMACS test only for a specific HECBioSim system, use for example:

reframe --run --name 'GROMACS.*HECBioSim/hEGFRDimerPair'\n

To run this GROMACS test with the smallest HECBioSim system (Crambin), you can use the CI tag:

reframe --run --name GROMACS --tag CI\n
"},{"location":"test-suite/available-tests/#tensorflow","title":"TensorFlow","text":"

A test for TensorFlow, a machine learning framework, is included, which is based on the \"Multi-worker training with Keras\" TensorFlow tutorial.

It is implemented in tests/apps/tensorflow/.

To run this TensorFlow test, use:

reframe --run --name TensorFlow\n

Warning

This test requires TensorFlow v2.11 or newer; using an older TensorFlow version will not work!

"},{"location":"test-suite/available-tests/#osumicrobenchmarks","title":"OSU Micro-Benchmarks","text":"

A test for OSU Micro-Benchmarks, which provides an MPI benchmark.

It is implemented in tests/apps/osu.py.

To run this OSU Micro-Benchmarks test, use:

reframe --run --name OSU-Micro-Benchmarks\n

Warning

This test requires OSU Micro-Benchmarks v5.9 or newer; using an older OSU Micro-Benchmarks version will not work!

"},{"location":"test-suite/available-tests/#espresso","title":"ESPResSo","text":"

A test for ESPResSo, a software package for performing and analysing scientific molecular dynamics simulations.

It is implemented in tests/apps/espresso/.

Two test cases are included: P3M (ionic crystals) and LJ (Lennard-Jones particle box).

Both tests are weak scaling tests, so the number of particles is scaled with the number of MPI ranks.
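Weak scaling here means the problem grows with the resources so that the work per rank stays constant; conceptually (this is a sketch, not the test suite's actual implementation):

```python
def weak_scaled_particles(base_particles: int, n_ranks: int) -> int:
    """Total particle count for a weak-scaling run: each added MPI rank
    brings its own share of particles, so work per rank stays constant."""
    return base_particles * n_ranks

# Per-rank workload stays at 1000 particles as the rank count increases
for n_ranks in (1, 2, 4, 8):
    total = weak_scaled_particles(1000, n_ranks)
    assert total // n_ranks == 1000
```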

To run this ESPResSo test, use:

reframe --run --name ESPResSo\n
"},{"location":"test-suite/available-tests/#quantumespresso","title":"QuantumESPRESSO","text":"

A test for QuantumESPRESSO, an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

It is implemented in tests/apps/QuantumESPRESSO.py.

To run this QuantumESPRESSO test, use:

reframe --run --name QuantumESPRESSO\n

Warning

This test requires ReFrame v4.6.0 or newer; in older versions, the QuantumESPRESSO test is not included in hpctestlib!

"},{"location":"test-suite/installation-configuration/","title":"Installing and configuring the EESSI test suite","text":"

This page covers the requirements, installation and configuration of the EESSI test suite.

"},{"location":"test-suite/installation-configuration/#requirements","title":"Requirements","text":"

The EESSI test suite requires

"},{"location":"test-suite/installation-configuration/#installing-reframe","title":"Installing ReFrame","text":"

General instructions for installing ReFrame are available in the ReFrame documentation. To check if ReFrame is available, run the reframe command:

reframe --version\n
(for more details on the ReFrame version requirement, click here)

Two important bugs were resolved in ReFrame's CPU autodetect functionality in version 4.3.3.

We strongly recommend you use ReFrame >= 4.3.3.

If you are using an older version of ReFrame, you may encounter some issues:

"},{"location":"test-suite/installation-configuration/#installing-reframe-test-library-hpctestlib","title":"Installing ReFrame test library (hpctestlib)","text":"

The EESSI test suite requires that the ReFrame test library (hpctestlib) is available, which is currently not included in a standard installation of ReFrame.

We recommend installing ReFrame using EasyBuild (version 4.8.1, or newer), or using a ReFrame installation that is available in the EESSI repository (version 2023.06, or newer).

For example (using EESSI):

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ReFrame/4.3.3\n

To check whether the ReFrame test library is available, try importing a submodule of the hpctestlib Python package:

python3 -c 'import hpctestlib.sciapps.gromacs'\n
"},{"location":"test-suite/installation-configuration/#installation","title":"Installation","text":"

To install the EESSI test suite, you can either use pip or clone the GitHub repository directly:

"},{"location":"test-suite/installation-configuration/#pip-install","title":"Using pip","text":"
pip install git+https://github.com/EESSI/test-suite.git\n
"},{"location":"test-suite/installation-configuration/#cloning-the-repository","title":"Cloning the repository","text":"
git clone https://github.com/EESSI/test-suite $HOME/EESSI-test-suite\ncd EESSI-test-suite\nexport PYTHONPATH=$PWD:$PYTHONPATH\n
"},{"location":"test-suite/installation-configuration/#verify-installation","title":"Verify installation","text":"

To check whether the EESSI test suite installed correctly, try importing the eessi.testsuite Python package:

python3 -c 'import eessi.testsuite'\n
"},{"location":"test-suite/installation-configuration/#configuration","title":"Configuration","text":"

Before you can run the EESSI test suite, you need to create a configuration file for ReFrame that is specific to the system on which the tests will be run.

Example configuration files are available in the config subdirectory of the EESSI/test-suite GitHub repository, which you can use as a template to create your own.

"},{"location":"test-suite/installation-configuration/#configuring-reframe-environment-variables","title":"Configuring ReFrame environment variables","text":"

We recommend setting a couple of $RFM_* environment variables to configure ReFrame, to avoid needing to include particular options to the reframe command over and over again.

"},{"location":"test-suite/installation-configuration/#RFM_CONFIG_FILES","title":"ReFrame configuration file ($RFM_CONFIG_FILES)","text":"

(see also RFM_CONFIG_FILES in ReFrame docs)

Define the $RFM_CONFIG_FILES environment variable to instruct ReFrame which configuration file to use, for example:

export RFM_CONFIG_FILES=$HOME/EESSI-test-suite/config/example.py\n

Alternatively, you can use the --config-file (or -C) reframe option.

See the section on the ReFrame configuration file for more information.

"},{"location":"test-suite/installation-configuration/#search-path-for-tests-rfm_check_search_path","title":"Search path for tests ($RFM_CHECK_SEARCH_PATH)","text":"

(see also RFM_CHECK_SEARCH_PATH in ReFrame docs)

Define the $RFM_CHECK_SEARCH_PATH environment variable to tell ReFrame which directory to search for tests.

In addition, define $RFM_CHECK_SEARCH_RECURSIVE to ensure that ReFrame searches $RFM_CHECK_SEARCH_PATH recursively (i.e. so that also tests in subdirectories are found).

For example:

export RFM_CHECK_SEARCH_PATH=$HOME/EESSI-test-suite/eessi/testsuite/tests\nexport RFM_CHECK_SEARCH_RECURSIVE=1\n

Alternatively, you can use the --checkpath (or -c) and --recursive (or -R) reframe options.

"},{"location":"test-suite/installation-configuration/#RFM_PREFIX","title":"ReFrame prefix ($RFM_PREFIX)","text":"

(see also RFM_PREFIX in ReFrame docs)

Define the $RFM_PREFIX environment variable to tell ReFrame where to store the files it produces. For example:

export RFM_PREFIX=$HOME/reframe_runs\n

This involves:

Note that by default ReFrame uses the current directory as prefix. We recommend setting a prefix so that log files are not scattered around, and are neatly appended to for each run.

If our common logging configuration is used, the regular ReFrame log file will also end up in the location specified by $RFM_PREFIX.

Warning

Using the --prefix option in your reframe command is not equivalent to setting $RFM_PREFIX, since our common logging configuration only picks up on the $RFM_PREFIX environment variable to determine the location for the ReFrame log file.

"},{"location":"test-suite/release-notes/","title":"Release notes for EESSI test suite","text":""},{"location":"test-suite/release-notes/#030-27-june-2024","title":"0.3.0 (27 june 2024)","text":"

This is a minor release of the EESSI test suite.

It includes:

"},{"location":"test-suite/release-notes/#020-7-march-2024","title":"0.2.0 (7 march 2024)","text":"

This is a minor release of the EESSI test suite.

It includes:

Bug fixes:

"},{"location":"test-suite/release-notes/#010-5-october-2023","title":"0.1.0 (5 October 2023)","text":"

Version 0.1.0 is the first release of the EESSI test suite.

It includes:

"},{"location":"test-suite/usage/","title":"Using the EESSI test suite","text":"

This page covers the usage of the EESSI test suite.

We assume you have already installed and configured the EESSI test suite on your system.

"},{"location":"test-suite/usage/#listing-available-tests","title":"Listing available tests","text":"

To list the tests that are available in the EESSI test suite, use reframe --list (or reframe -L for short).

If you have properly configured ReFrame, you should see a (potentially long) list of checks in the output:

$ reframe --list\n...\n[List of matched checks]\n- ...\nFound 123 check(s)\n

Note

When using --list, checks are only generated based on modules that are available in the system where the reframe command is invoked.

The system partitions specified in your ReFrame configuration file are not taken into account when using --list.

So, if --list produces an overview of 50 checks, and you have 4 system partitions in your configuration file, actually running the test suite may result in (up to) 200 checks being executed.

"},{"location":"test-suite/usage/#dry-run","title":"Performing a dry run","text":"

To perform a dry run of the EESSI test suite, use reframe --dry-run:

$ reframe --dry-run\n...\n[==========] Running 1234 check(s)\n\n[----------] start processing checks\n[ DRY      ] GROMACS_EESSI ...\n...\n[----------] all spawned checks have finished\n\n[  PASSED  ] Ran 1234/1234 test case(s) from 1234 check(s) (0 failure(s), 0 skipped, 0 aborted)\n

Note

When using --dry-run, the system partitions listed in your ReFrame configuration file are also taken into account when generating checks, in addition to available modules and test parameters, which is not the case when using --list.

"},{"location":"test-suite/usage/#running-the-full-test-suite","title":"Running the (full) test suite","text":"

To actually run the (full) EESSI test suite and let ReFrame produce a performance report, use reframe --run --performance-report.

We strongly recommend filtering the checks that will be run by using additional options like --system, --name, --tag (see the 'Filtering tests' section below), and doing a dry run first to make sure that the generated checks correspond to what you have in mind.

"},{"location":"test-suite/usage/#reframe-output-and-log-files","title":"ReFrame output and log files","text":"

ReFrame will generate various output and log files:

We strongly recommend controlling where these files go by using the common logging configuration that is provided by the EESSI test suite in your ReFrame configuration file and setting $RFM_PREFIX (avoid using the cmd line option --prefix).

If you do, and you use ReFrame v4.3.3 or newer, you should find the output and log files at:

In the stage and output directories, there will be a subdirectory for each check that was run, which are tagged with a unique hash (like d3adb33f) that is determined based on the specific parameters for that check (see the ReFrame documentation for more details on the test naming scheme).

"},{"location":"test-suite/usage/#filtering-tests","title":"Filtering tests","text":"

By default, ReFrame will automatically generate checks for each system partition, based on the tests available in the EESSI test suite, available software modules, and tags defined in the EESSI test suite.

To avoid being overwhelmed by checks, it is recommended to apply filters so that ReFrame only generates the checks you are interested in.

"},{"location":"test-suite/usage/#filter-name","title":"Filtering by test name","text":"

You can filter checks based on the full test name using the --name option (or -n); the full test name includes the values of all test parameters.

Here's an example of a full test name:

GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f @example:gpu+default\n

To let ReFrame only generate checks for GROMACS, you can use:

reframe --name GROMACS\n

To only run GROMACS checks with a particular version of GROMACS, you can use --name to only retain specific GROMACS modules:

reframe --name %module_name=GROMACS/2023.1\n

Likewise, you can filter on any part of the test name.

You can also select one specific check using the corresponding test hash, which is also part of the full test name (see /d3adb33f in the example above). For example:

reframe --name /d3adb33f\n

The argument passed to --name is interpreted as a Python regular expression, so you can use wildcards like .*, character ranges like [0-9], ^ to anchor the pattern at the start of the test name, etc.

Use --list or --dry-run to check the impact of using the --name option.
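To build some intuition for how --name patterns behave, here is a small, purely illustrative Python sketch that matches such patterns against the full test name from the example above, the way a regular-expression search would:

```python
import re

# Full test name from the example above (illustrative)
full_name = ("GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu "
             "%scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f")

# --name patterns are interpreted as Python regular expressions and
# matched anywhere in the full test name (unless anchored with ^)
print(bool(re.search(r'GROMACS', full_name)))                       # True: matches the test class name
print(bool(re.search(r'%module_name=GROMACS/2023\.1', full_name)))  # True: matches a parameter value
print(bool(re.search(r'/d3adb33f', full_name)))                     # True: matches the test hash
print(bool(re.search(r'^TensorFlow', full_name)))                   # False: anchored pattern does not match
```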

"},{"location":"test-suite/usage/#filter-system-partition","title":"Filtering by system (partition)","text":"

By default, ReFrame will generate checks for each system partition that is listed in your configuration file.

To let ReFrame only generate checks for a particular system or system partition, you can use the --system option.

For example:

Use --dry-run to check the impact of using the --system option.

"},{"location":"test-suite/usage/#filter-tag","title":"Filtering by tags","text":"

To filter tests using one or more tags, you can use the --tag option.

Using --list-tags you can get a list of known tags.

To check the impact of this on generated checks by ReFrame, use --list or --dry-run.

"},{"location":"test-suite/usage/#ci-tag","title":"CI tag","text":"

For each software that is included in the EESSI test suite, a small test is tagged with CI to indicate it can be used in a Continuous Integration (CI) environment.

Hence, you can use this tag to let ReFrame only generate checks for small test cases:

reframe --tag CI\n

For example:

$ reframe --name GROMACS --tag CI\n...\n
"},{"location":"test-suite/usage/#scale-tags","title":"scale tags","text":"

The EESSI test suite defines a set of custom tags that control the scale of checks, which specify how many cores/GPUs/nodes should be used for running a check. The number of cores and GPUs serves as an upper limit; the actual count depends on the specific configuration of cores, GPUs, and sockets within the node, as well as on the specific test being carried out.

| tag name | description |
|---|---|
| 1_core | using 1 CPU core and 1 GPU |
| 2_cores | using 2 CPU cores and 1 GPU |
| 4_cores | using 4 CPU cores and 1 GPU |
| 1cpn_2nodes | using 1 CPU core per node, 1 GPU per node, and 2 nodes |
| 1cpn_4nodes | using 1 CPU core per node, 1 GPU per node, and 4 nodes |
| 1_8_node | using 1/8th of a node (12.5% of available cores/GPUs, 1 at minimum) |
| 1_4_node | using a quarter of a node (25% of available cores/GPUs, 1 at minimum) |
| 1_2_node | using half of a node (50% of available cores/GPUs, 1 at minimum) |
| 1_node | using a full node (all available cores/GPUs) |
| 2_nodes | using 2 full nodes |
| 4_nodes | using 4 full nodes |
| 8_nodes | using 8 full nodes |
| 16_nodes | using 16 full nodes |
"},{"location":"test-suite/usage/#using-multiple-tags","title":"Using multiple tags","text":"

To filter tests using multiple tags, you can:

"},{"location":"test-suite/usage/#example-commands","title":"Example commands","text":"

Running all GROMACS tests on 4 cores on the cpu partition

reframe --run --system example:cpu --name GROMACS --tag 4_cores --performance-report\n

List all checks for TensorFlow 2.11 using a single node

reframe --list --name %module_name=TensorFlow/2.11 --tag 1_node\n

Dry run of TensorFlow CI checks on a quarter (1/4) of a node (on all system partitions)

reframe --dry-run --name 'TensorFlow.*CUDA' --tag 1_4_node --tag CI\n
"},{"location":"test-suite/usage/#overriding-test-parameters-advanced","title":"Overriding test parameters (advanced)","text":"

You can override test parameters using the --setvar option (or -S).

This can be done either globally (for all tests), or only for specific tests (which is recommended when using --setvar).

For example, to run all GROMACS checks with a specific GROMACS module, you can use:

reframe --setvar GROMACS_EESSI.modules=GROMACS/2023.1-foss-2022a ...\n

Warning

We do not recommend using --setvar, since it is quite easy to make unintended changes to test parameters this way that can result in broken checks.

You should try filtering tests using the --name or --tag options instead.

"},{"location":"test-suite/writing-portable-tests/","title":"Writing portable tests","text":"

This page is a tutorial on how to write a new test for the EESSI test suite.

If you already know how to write regular ReFrame tests, we suggest you read the High-level overview and Test requirements sections, then skip ahead to Step 3: implementing as a portable ReFrame test.

"},{"location":"test-suite/writing-portable-tests/#high-level-overview","title":"High-level overview","text":"

In this tutorial, you will learn how to write a test for the EESSI test suite. It is important to realize in which context the test suite will be run. Roughly speaking, there are three uses:

The test suite contains a combination of real-life use cases for end-user scientific software (e.g. tests for GROMACS, TensorFlow, CP2K, OpenFOAM, etc) and low level tests (e.g. OSU Microbenchmarks).

The tests in the EESSI test suite are developed using the ReFrame HPC testing framework. Typically, ReFrame tests hardcode system-specific information (core counts, performance references, etc) in the test definition. The EESSI test suite aims to be portable, and implements a mixin class that invokes a series of standard hooks to replace information that is typically hardcoded. All system-specific information is then limited to the ReFrame configuration file. As an example: rather than hardcoding that a test should run with 128 tasks (i.e. because a system has 128-core nodes), the EESSI test suite has a hook that can specify that a test should run on a "single, full node". The hook queries the ReFrame configuration file for the number of cores per node, and uses that as the number of tasks. Thus, on a 64-core node, this test would run with 64 tasks, while on a 128-core node, it would run with 128 tasks.
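The idea can be sketched in a few lines of plain Python (a deliberate simplification, not the actual EESSI hook):

```python
# Simplified sketch: derive the task count from a per-system "cores per node"
# value (as found in the ReFrame configuration) instead of hardcoding it.
def tasks_for_full_nodes(cores_per_node, num_nodes=1):
    """One task per core on each full node."""
    return cores_per_node * num_nodes

print(tasks_for_full_nodes(64))    # 64 tasks on a 64-core node
print(tasks_for_full_nodes(128))   # 128 tasks on a 128-core node
```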

"},{"location":"test-suite/writing-portable-tests/#test-requirements","title":"Test requirements","text":"

To be useful in the aforementioned scenarios, tests need to satisfy a number of requirements.

"},{"location":"test-suite/writing-portable-tests/#step-by-step-tutorial-for-writing-a-portable-reframe-test","title":"Step-by-step tutorial for writing a portable ReFrame test","text":"

In the next section, we will show how to write a test for the EESSI test suite by means of an example: we will create a test for mpi4py that executes an MPI_REDUCE call to sum the ranks of all processes. If you're unfamiliar with MPI or mpi4py, or want to see the exact code this test will run, you may want to read Background of the mpi4py test before proceeding. The complete test developed in this tutorial can be found in the tutorials/mpi4py directory of the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#step-1-writing-job-scripts-to-execute-tests","title":"Step 1: writing job scripts to execute tests","text":"

Although not strictly needed for the implementation of a ReFrame test, it is useful to first try to write a job script for how you would want to run this test on a given system. For example, on a system with 128-core nodes, managed by SLURM, we might have the following job scripts to execute the mpi4py_reduce.py code.

To run on 2 cores:

#!/bin/bash\n#SBATCH --ntasks=2  # 2 tasks, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 2 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n
To run on one full node:
#!/bin/bash\n#SBATCH --ntasks=128  # min. 2 tasks in total, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --ntasks-per-node=128\n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 128 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n
To run on two full nodes:
#!/bin/bash\n#SBATCH --ntasks=256 # min. 2 tasks in total, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --ntasks-per-node=128 \n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 256 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n

Clearly, such job scripts are not very portable: these only work on SLURM systems, we had to duplicate a lot to run on different scales, we would have to duplicate even more if we wanted to test multiple mpi4py versions, etc. This is where ReFrame comes in: it has support for different schedulers, and allows one to easily specify a range of parameters (such as the number of tasks in the above example) to create tests for.

"},{"location":"test-suite/writing-portable-tests/#step-2-implementing-as-a-non-portable-reframe-test","title":"Step 2: implementing as a non-portable ReFrame test","text":"

First, let us implement this as a non-portable test in ReFrame. This code can be found under tutorials/mpi4py/mpi4py_system_specific.py in the EESSI test suite repository. We will not elaborate on how to write ReFrame tests, it is well-documented in the official ReFrame documentation. We have put extensive comments in the test definition below, to make it easier to understand when you have limited familiarity with ReFrame. Whenever the variables below have a specific meaning in ReFrame, we referenced the official documentation:

\"\"\"\nThis module tests mpi4py's MPI_Reduce call\n\"\"\"\n\nimport reframe as rfm\nimport reframe.utility.sanity as sn\n\n# added only to make the linter happy\nfrom reframe.core.builtins import variable, parameter, run_after, performance_function, sanity_function\n\n\n# This python decorator indicates to ReFrame that this class defines a test\n# Our class inherits from rfm.RunOnlyRegressionTest, since this test does not have a compilation stage\n# https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RunOnlyRegressionTest\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest):\n    # Programming environments are only relevant for tests that compile something\n    # Since we are testing existing modules, we typically don't compile anything and simply define\n    # 'default' as the valid programming environment\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.valid_prog_environs\n    valid_prog_environs = ['default']\n\n    # Typically, we list here the name of our cluster as it is specified in our ReFrame configuration file\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.valid_systems\n    valid_systems = ['snellius']\n\n    # ReFrame will generate a test for each module\n    # NOTE: each parameter adds a new dimension to the parametrization space. \n    # (EG 4 parameters with (3,3,2,2) possible values will result in 36 tests).\n    # Be mindful of how many parameters you add to avoid the number of tests generated being excessive.\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.parameter\n    module_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\n\n    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n\n    # Our script has two arguments, --n_iter and --n_warmup. 
By defining these as ReFrame variables, we can\n    # enable the end-user to overwrite their value on the command line when invoking ReFrame.\n    # Note that we don't typically expose ALL variables, especially if a script has many - we expose\n    # only those that we think an end-user might want to overwrite\n    # Number of iterations to run (more iterations takes longer, but results in more accurate timing)\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.variable\n    n_iterations = variable(int, value=1000)\n\n    # Similar for the number of warmup iterations\n    n_warmup = variable(int, value=100)\n\n    # Define which executable to run\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.executable\n    executable = 'python3'\n\n    # Define which options to pass to the executable\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.executable_opts\n    executable_opts = ['mpi4py_reduce.py', '--n_iter', f'{n_iterations}', '--n_warmup', f'{n_warmup}']\n\n    # Define a time limit for the scheduler running this test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.time_limit\n    time_limit = '5m00s'\n\n    # Using this decorator, we tell ReFrame to run this AFTER the init step of the test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.run_after\n    # See https://reframe-hpc.readthedocs.io/en/stable/pipeline.html for all steps in the pipeline\n    # that reframe uses to execute tests. Note that after the init step, ReFrame has generated test instances for each\n    # of the combinations of parameters above. Thus, now, there are 6 instances (2 module names * 3 scales). 
Here,\n    # we set the modules to load equal to one of the module names\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.modules\n    @run_after('init')\n    def set_modules(self):\n        self.modules = [self.module_name]\n\n    # Similar for the scale, we now set the number of tasks equal to the scale for this instance\n    @run_after('init')\n    def define_task_count(self):\n        # Set the number of tasks, self.scale is now a single number out of the parameter list\n        # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks\n        self.num_tasks = self.scale\n        # Set the number of tasks per node to either be equal to the number of tasks, but at most 128,\n        # since we have 128-core nodes\n        # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks_per_node\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n\n    # Now, we check if the pattern 'Sum of all ranks: X' with X the correct sum for the amount of ranks is found\n    # in the standard output:\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.sanity_function\n    @sanity_function\n    def validate(self):\n        # Sum of 0, ..., N-1 is (N * (N-1) / 2)\n        sum_of_ranks = round(self.scale * ((self.scale - 1) / 2))\n        # https://reframe-hpc.readthedocs.io/en/stable/deferrable_functions_reference.html#reframe.utility.sanity.assert_found\n        return sn.assert_found(r'Sum of all ranks: %s' % sum_of_ranks, self.stdout)\n\n    # Now, we define a pattern to extract a number that reflects the performance of this test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.performance_function\n    @performance_function('s')\n    def time(self):\n        # 
https://reframe-hpc.readthedocs.io/en/stable/deferrable_functions_reference.html#reframe.utility.sanity.extractsingle\n        return sn.extractsingle(r'^Time elapsed:\\s+(?P<perf>\\S+)', self.stdout, 'perf', float)\n

This single test class will generate 6 test instances: tests with 2, 128, and 256 tasks, for each of the two modules. It will check the sum of ranks produced at the end of the output, which is how ReFrame validates that the test ran correctly. Finally, it will also print the performance number that was extracted by the performance_function.
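The sanity check in the test relies on the fact that the ranks 0 .. N-1 sum to N*(N-1)/2, so the expected value can be computed for any task count. A quick self-contained check of that formula at the scales used above:

```python
# Sum of ranks 0 .. N-1 equals N * (N - 1) / 2, for any number of tasks N
for n in (2, 128, 256):
    expected = round(n * ((n - 1) / 2))  # same expression as in the validate() function
    assert sum(range(n)) == expected
print("sum-of-ranks formula holds")
```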

This test works, but is not very portable. If we move to a system with 192 cores per node, the current scale parameter is a bit awkward. The test would still run, but we wouldn't have a test instance that just tests this on a full (single) node or two full nodes. Furthermore, if we add a new mpi4py module in EESSI, we would have to alter the test to add the name to the list, since the module names are hardcoded in this test.

"},{"location":"test-suite/writing-portable-tests/#as-portable-reframe-test","title":"Step 3: implementing as a portable ReFrame test","text":"

In step 2, there were several system-specific items in the test. In this section, we will show how we use inheritance from the EESSI_Mixin class to avoid hard-coding system specific information. The full final test can be found under tutorials/mpi4py/mpi4py_portable_mixin.py in the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#how-eessi_mixin-works","title":"How EESSI_Mixin works","text":"

The EESSI_Mixin class provides standardized functionality that should be useful to all tests in the EESSI test suite. One of its key functions is to make sure tests dynamically determine sensible values for the things that were system-specific in Step 2. For example, instead of hardcoding a task count, the test inheriting from EESSI_Mixin will determine this dynamically based on the number of available cores per node, and a declaration by the inheriting test class of how tasks should be instantiated.

To illustrate this, suppose you want to launch your test with one task per CPU core. In that case, your test (that inherits from EESSI_Mixin) only has to declare

compute_unit = COMPUTE_UNIT[CPU]\n

The EESSI_Mixin class then takes care of querying the ReFrame config file for the cpu topology of the node, and setting the correct number of tasks per node.

Another feature is that it sets defaults for a few items, such as the valid_prog_environs = ['default']. These will likely be the same for most tests in the EESSI test suite, and when they do need to be different, one can easily overwrite them in the child class.

Most of the functionality in the EESSI_Mixin class requires certain class attributes (such as the compute_unit above) to be set by the child class, so that the EESSI_Mixin class can use those as input. It is important that these attributes are set before the stage in which the EESSI_Mixin class needs them (see the stages of the ReFrame regression pipeline). To support test developers, the EESSI_Mixin class checks if these attributes are set, and gives verbose feedback in case any attributes are missing.

"},{"location":"test-suite/writing-portable-tests/#inheriting-from-eessi_mixin","title":"Inheriting from EESSI_Mixin","text":"

The first step is to actually inherit from the EESSI_Mixin class:

from eessi.testsuite.eessi_mixin import EESSI_Mixin\n...\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n
"},{"location":"test-suite/writing-portable-tests/#removing-hard-coded-test-scales","title":"Removing hard-coded test scales","text":"

First, we remove

    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n
from the test. The EESSI_Mixin class will define the default set of scales on which this test will be run as
from eessi.testsuite.constants import SCALES\n...\n    scale = parameter(SCALES.keys())\n

This ensures the test will run a test case for each of the default scales, as defined by the SCALES constant.

If, and only if, your test cannot run on all of those scales should you overwrite this parameter in your child class. For example, if you have a test that does not support running on multiple nodes, you could define a filtering function outside of the class

def filter_scales():\n    return [\n        k for (k,v) in SCALES.items()\n        if v['num_nodes'] == 1\n    ]\n
and then in the class body overwrite the scale parameter with a subset of items from the SCALES constant:
    scale = parameter(filter_scales())\n
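To make the filtering concrete, here is a runnable sketch using a hypothetical subset of SCALES entries (the real constant lives in eessi.testsuite.constants and contains more keys per scale):

```python
# Hypothetical subset of the SCALES constant, for illustration only
SCALES = {
    '1_core':  {'num_nodes': 1},
    '1_node':  {'num_nodes': 1},
    '2_nodes': {'num_nodes': 2},
    '4_nodes': {'num_nodes': 4},
}

def filter_scales():
    # retain only the single-node scales
    return [k for (k, v) in SCALES.items() if v['num_nodes'] == 1]

print(filter_scales())  # only '1_core' and '1_node' survive the filter
```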

Next, we also remove

   @run_after('init')\n    def define_task_count(self):\n        self.num_tasks = self.scale\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n

as num_tasks and num_tasks_per_node will be set by the assign_tasks_per_compute_unit hook, which is invoked by the EESSI_Mixin class.

Instead, we only set the compute_unit. The number of launched tasks will be equal to the number of compute units. E.g.

    compute_unit = COMPUTE_UNIT[CPU]\n
will launch one task per (physical) CPU core. Other options are COMPUTE_UNIT[HWTHREAD] (one task per hardware thread), COMPUTE_UNIT[NUMA_NODE] (one task per NUMA node), COMPUTE_UNIT[CPU_SOCKET] (one task per CPU socket), COMPUTE_UNIT[GPU] (one task per GPU) and COMPUTE_UNIT[NODE] (one task per node). Check the COMPUTE_UNIT constant for the full list of valid compute units. The number of cores per task is then automatically set to the ratio of the number of cores in a node to the number of tasks per node (rounded down). Additionally, the EESSI_Mixin class will set the OMP_NUM_THREADS environment variable equal to the number of cores per task.
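The resulting cores-per-task value can be sketched as follows (the node sizes and unit counts below are illustrative, not taken from any particular system):

```python
# Illustrative sketch of how cores per task follow from the compute unit:
# cores_per_task = cores_per_node // tasks_per_node (rounded down)
def cores_per_task(cores_per_node, tasks_per_node):
    return cores_per_node // tasks_per_node

# e.g. one task per NUMA node on a 128-core node with 8 NUMA nodes:
print(cores_per_task(128, 8))    # 16 cores per task; OMP_NUM_THREADS would be set to 16
# one task per physical core on the same node:
print(cores_per_task(128, 128))  # 1 core per task
```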

Note

compute_unit needs to be set before (or in) ReFrame's setup phase. For the different phases of the pipeline, please see the documentation on how ReFrame executes tests.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-module-names","title":"Replacing hard-coded module names","text":"

Instead of hard-coding a module name, we parameterize over all module names that match a certain regular expression.

from eessi.testsuite.utils import find_modules\n...\n    module_name = parameter(find_modules('mpi4py'))\n

This parameter generates all module names available on the current system matching the expression, and each test instance will load the respective module before running the test.

Furthermore, we remove the hook that sets self.module:

@run_after('init')\ndef set_modules(self):\n    self.modules = [self.module_name]\n
This is now taken care of by the EESSI_Mixin class.

Note

module_name needs to be set before (or in) ReFrame's init phase

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-system-names-and-programming-environments","title":"Replacing hard-coded system names and programming environments","text":"

First, we remove the hard-coded system name and programming environment. I.e. we remove

    valid_prog_environs = ['default']\n    valid_systems = ['snellius']\n
The EESSI_Mixin class sets valid_prog_environs = ['default'] by default, so that is no longer needed in the child class (but it can be overwritten if needed). The valid_systems is instead replaced by a declaration of what type of device type is needed. We'll create an mpi4py test that runs on CPUs only:
    device_type = DEVICE_TYPES[CPU]\n
but note if we would have wanted to also generate test instances to test GPU <=> GPU communication, we could have defined this as a parameter:
    device_type = parameter([DEVICE_TYPES[CPU], DEVICE_TYPES[GPU]])\n

The device type that is set will be used by the filter_valid_systems_by_device_type hook to check in the ReFrame configuration file which of the current partitions contain the relevant device. Typically, we don't set the DEVICE_TYPES[CPU] on a GPU partition in the ReFrame configuration, so that we skip all CPU-only tests on GPU nodes. Check the DEVICE_TYPES constant for the full list of valid device types.

EESSI_Mixin also filters based on the supported scales, which can again be configured per partition in the ReFrame configuration file. This can e.g. be used to avoid running large-scale tests on partitions that don't have enough nodes to run them.
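The combined filtering on device type and supported scales can be sketched as follows (the partition layout and field names are hypothetical, mimicking what a ReFrame configuration might declare):

```python
# Hypothetical partition configuration, for illustration only
partitions = [
    {'name': 'example:cpu', 'devices': ['cpu'], 'max_nodes': 2},
    {'name': 'example:gpu', 'devices': ['gpu'], 'max_nodes': 16},
]

def valid_partitions(device_type, num_nodes=1):
    # retain partitions that have the requested device type and enough nodes
    return [p['name'] for p in partitions
            if device_type in p['devices'] and p['max_nodes'] >= num_nodes]

print(valid_partitions('cpu'))     # only the CPU partition matches
print(valid_partitions('cpu', 4))  # empty: 4 nodes exceeds the CPU partition's size
```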

Note

device_type needs to be set before (or in) ReFrame's init phase

"},{"location":"test-suite/writing-portable-tests/#requesting-sufficient-ram-memory","title":"Requesting sufficient RAM memory","text":"

To make sure you get an allocation with sufficient memory, your test should declare how much memory per node it needs by defining a required_mem_per_node function in your test class that returns the required memory per node (in MiB). Note that the amount of required memory generally depends on the amount of tasks that are launched per node (self.num_tasks_per_node).

Our mpi4py test takes around 200 MiB when running with a single task, plus about 70 MiB for every additional task. We round this up a little so that we can be sure the test won't run out of memory if memory consumption is slightly different on a different system. Thus, we define:

def required_mem_per_node(self):\n    return self.num_tasks_per_node * 100 + 250\n

While rounding up is advisable, do keep your estimate realistic. Too high a memory request will mean the test will get skipped on systems that cannot satisfy that memory request. Most HPC systems have at least 1 GB per core, and most laptop/desktops have at least 8 GB total. Designing a test so that it fits within those memory constraints will ensure it can be run almost anywhere.
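Evaluating this estimate at a few task counts shows that it stays well within the "1 GB per core" rule of thumb mentioned above (a small illustrative calculation, using the same formula as the test):

```python
# The memory estimate from the test above, evaluated at a few task counts (in MiB)
def required_mem_per_node(num_tasks_per_node):
    return num_tasks_per_node * 100 + 250

for n in (1, 2, 128):
    print(n, required_mem_per_node(n))
# 1 task -> 350 MiB, 2 tasks -> 450 MiB, 128 tasks -> 13050 MiB (~13 GiB on a 128-core node)
```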

Note

The easiest way to get the memory consumption of your test at various task counts is to execute it on a system which runs jobs in cgroups, define measure_memory_usage = True in your class body, and make the required_mem_per_node function return a constant amount of memory equal to the available memory per node on your test system. This will cause the EESSI_Mixin class to read out the maximum memory usage of the cgroup (on the head node of your allocation, in case of multi-node tests) and report it as a performance number.

"},{"location":"test-suite/writing-portable-tests/#process-binding","title":"Process binding","text":"

The EESSI_Mixin class binds processes to their respective number of cores automatically using the hooks.set_compact_process_binding hook. E.g. for a pure MPI test like mpi4py, each task will be bound to a single core. For hybrid tests that do both multiprocessing and multithreading, each task is bound to a consecutive set of cores. E.g. on a node with 128 cores and a hybrid test with 64 tasks and 2 threads per task, the first task will be bound to cores 0 and 1, the second task to cores 2 and 3, etc. To override this behaviour, one would have to overwrite the

@run_after('setup')\ndef assign_tasks_per_compute_unit(self):\n    ...\n
function. Note that this function also calls other hooks (such as hooks.assign_task_per_compute_unit) that you probably still want to invoke. Check the EESSI_Mixin class definition to see which hooks you still want to call.

"},{"location":"test-suite/writing-portable-tests/#ci-tag","title":"CI Tag","text":"

As mentioned in the Test requirements, there should be at least one light-weight (short, low-core, low-memory) test case, which should be marked with the CI tag. The EESSI_Mixin class will automatically add the CI tag if both bench_name (the current variant) and bench_name_ci (the CI variant) are defined. The mpi4py test contains only one test case (which is very light-weight). In this case, it is sufficient to set both to the same name in the class body:

bench_name = 'mpi4py'\nbench_name_ci = 'mpi4py'\n

Suppose that our test has 2 variants, of which only 'variant1' should be marked CI. In that case, we can define bench_name as a parameter:

    bench_name = parameter(['variant1', 'variant2'])\n    bench_name_ci = 'variant1'\n
Next, we can define a hook that does different things depending on the variant, for example:
@run_after('init')\ndef do_something(self):\n    if self.bench_name == 'variant1':\n        do_this()\n    elif self.bench_name == 'variant2':\n        do_that()\n

"},{"location":"test-suite/writing-portable-tests/#thread-binding-optional","title":"Thread binding (optional)","text":"

Thread binding is not done by default, but can be done by invoking the hooks.set_compact_thread_binding hook:

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_thread_binding(self)\n

"},{"location":"test-suite/writing-portable-tests/#skipping-test-instances","title":"Skipping test instances when required (optional)","text":"

Preferably, we prevent test instances from being generated (i.e. before ReFrame's setup phase) if we know that they cannot run on a certain system. However, sometimes we need information on the nodes that will run it, which is only available after the setup phase. That is the case for anything where we need information from e.g. the reframe.core.pipeline.RegressionTest.current_partition.

For example, we might know that a test only scales to around 300 tasks; above that, execution time increases rapidly. In that case, we'd want to skip any test instance that results in a larger number of tasks, but we only know this after assign_tasks_per_compute_unit has been called (which is done by EESSI_Mixin after the setup stage). For example, the 2_nodes scale would run fine on systems with 128 cores per node, but would exceed the task limit of 300 on systems with 192 cores per node.

We can skip any generated test cases using the skip_if function. For example, to skip the test if the total task count exceeds 300, we'd need to call skip_if after the setup stage (so that self.num_tasks is already set):

@run_after('setup')\ndef skip_beyond_max_tasks(self):\n    max_tasks = 300\n    self.skip_if(self.num_tasks > max_tasks,\n                 f'Skipping test: more than {max_tasks} tasks are requested ({self.num_tasks})')\n

The mpi4py test scales up to a very high core count, but if we were to impose this task limit for the sake of this example, one would see:

[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=16_nodes /38aea144 @snellius:genoa+default\n[     SKIP ] ( 1/13) Skipping test: more than 300 tasks are requested (3072)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=8_nodes /bfc4d3d4 @snellius:genoa+default\n[     SKIP ] ( 2/13) Skipping test: more than 300 tasks are requested (1536)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_nodes /8de369bc @snellius:genoa+default\n[     SKIP ] ( 3/13) Skipping test: more than 300 tasks are requested (768)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_nodes /364146ba @snellius:genoa+default\n[     SKIP ] ( 4/13) Skipping test: more than 300 tasks are requested (384)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_node /8225edb3 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_2_node /4acf483a @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_4_node /fc3d689b @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_8_node /73046a73 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_4nodes /f08712a2 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_2nodes /23cd550b @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_cores /bb8e1349 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_cores /4c0c7c9e @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_core /aa83ba9e @snellius:genoa+default\n\n...\n
on a system with 192 cores per node. I.e. any test of 2 nodes (384 cores) or above would be skipped because it exceeds our max task count.

"},{"location":"test-suite/writing-portable-tests/#setting-a-time-limit-optional","title":"Setting a time limit (optional)","text":"

By default, the EESSI_Mixin class sets a time limit for jobs of 1 hour. You can overwrite this in your child class:

time_limit = '5m00s'\n
For the appropriate string formatting, please check the ReFrame documentation on time_limit. We already had this in the non-portable version of our mpi4py test and will keep it in the portable version: since this is a very quick test, specifying a lower time limit will help in getting the jobs scheduled more quickly.

Note that for the test to be portable, the time limit should be set such that it is sufficient regardless of node architecture and scale. It is pretty hard to guarantee this with a single, fixed time limit, without knowing upfront what architecture the test will be run on, and thus how many tasks will be launched. For strong scaling tests, you might want a higher time limit for low task counts, whereas for weak scaling tests you might want a higher time limit for higher task counts. To do so, you can consider setting the time limit after setup, and making it dependent on the task count.

Suppose we have a weak scaling test that takes 5 minutes with a single task, and 60 minutes with 10k tasks. We can set a time limit based on linear interpolation between those task counts:

import math\n...\n@run_after('setup')\ndef set_time_limit(self):\n    # linearly interpolate between the single-task and 10k-task timings,\n    # rounding up to whole minutes\n    minutes = math.ceil(5 + self.num_tasks * ((60 - 5) / 10000))\n    self.time_limit = f'{minutes}m00s'\n
Note that this is typically an overestimate of how long the test will take for intermediate task counts, but that's ok: we'd rather overestimate than underestimate the runtime.

To be even safer, one could consider combining this with logic to skip tests if the 10k task count is exceeded.

"},{"location":"test-suite/writing-portable-tests/#summary","title":"Summary","text":"

To make the test portable, we added the following imports:

from eessi.testsuite.eessi_mixin import EESSI_Mixin\nfrom eessi.testsuite.constants import COMPUTE_UNIT, DEVICE_TYPES, CPU\nfrom eessi.testsuite.utils import find_modules\n

Made sure the test inherits from EESSI_Mixin:

@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n

Removed the following from the class body:

valid_prog_environs = ['default']\nvalid_systems = ['snellius']\n\nmodule_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\nscale = parameter([2, 128, 256])\n

Added the following to the class body:

device_type = DEVICE_TYPES[CPU]\ncompute_unit = COMPUTE_UNIT[CPU]\n\nmodule_name = parameter(find_modules('mpi4py'))\n

Defined the class method:

def required_mem_per_node(self):\n    return self.num_tasks_per_node * 100 + 250\n

Removed the ReFrame pipeline hook that sets self.modules:

@run_after('init')\ndef set_modules(self):\n     self.modules = [self.module_name]\n

Removed the ReFrame pipeline hook that sets the number of tasks and number of tasks per node:

@run_after('init')\ndef define_task_count(self):\n    # Set the number of tasks, self.scale is now a single number out of the parameter list\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks\n    self.num_tasks = self.scale\n    # Set the number of tasks per node to either be equal to the number of tasks, but at most 128,\n    # since we have 128-core nodes\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks_per_node\n    self.num_tasks_per_node = min(self.num_tasks, 128)\n

The final test is thus:

\"\"\"\nThis module tests mpi4py's MPI_Reduce call\n\"\"\"\n\nimport reframe as rfm\nimport reframe.utility.sanity as sn\n\nfrom reframe.core.builtins import variable, parameter, run_after, performance_function, sanity_function\n\nfrom eessi.testsuite.eessi_mixin import EESSI_Mixin\nfrom eessi.testsuite.constants import COMPUTE_UNIT, DEVICE_TYPES, CPU\nfrom eessi.testsuite.utils import find_modules\n\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n    device_type = DEVICE_TYPES[CPU]\n    compute_unit = COMPUTE_UNIT[CPU]\n\n    module_name = parameter(find_modules('mpi4py'))\n\n    n_iterations = variable(int, value=1000)\n    n_warmup = variable(int, value=100)\n\n    executable = 'python3'\n    executable_opts = ['mpi4py_reduce.py', '--n_iter', f'{n_iterations}', '--n_warmup', f'{n_warmup}']\n\n    time_limit = '5m00s'\n\n    def required_mem_per_node(self):\n        return self.num_tasks_per_node * 100 + 250\n\n    @sanity_function\n    def validate(self):\n        sum_of_ranks = round(self.num_tasks * ((self.num_tasks - 1) / 2))\n        return sn.assert_found(r'Sum of all ranks: %s' % sum_of_ranks, self.stdout)\n\n    @performance_function('s')\n    def time(self):\n        return sn.extractsingle(r'^Time elapsed:\\s+(?P<perf>\\S+)', self.stdout, 'perf', float)\n

Note that at only 34 lines of code, this test is now very quick and easy to write, thanks to the default behaviour provided by the EESSI_Mixin class.

"},{"location":"test-suite/writing-portable-tests/#background-of-mpi4py-test","title":"Background of the mpi4py test","text":"

To understand what this test does, you need to know some basics of MPI. If you know about MPI, you can skip this section.

The MPI standard defines how to communicate between multiple processes that work on a common computational task. Each process that is part of the computational task gets a unique identifier (0 to N-1 for N processes), the MPI rank, which can e.g. be used to distribute a workload. The MPI standard defines communication between two given processes (so-called point-to-point communication), but also between a set of N processes (so-called collective communication).

An example of such a collective operation is the MPI_REDUCE call. It reduces data elements from multiple processes with a certain operation, e.g. it takes the sum of all elements or multiplication of all elements.

"},{"location":"test-suite/writing-portable-tests/#the-mpi4py-test","title":"The mpi4py test","text":"

In this example, we will implement a test that does an MPI_Reduce on the rank, using the MPI.SUM operation. This makes it easy to validate the result, as we know that for N processes, the theoretical sum of all ranks (0, 1, ... N-1) is (N * (N-1) / 2).

Our initial code is a python script mpi4py_reduce.py, which can be found in tutorials/mpi4py/src/mpi4py_reduce.py in the EESSI test suite repository:

#!/usr/bin/env python\n\"\"\"\nMPI_Reduce on MPI rank. This should result in a total of (size * (size - 1) / 2),\nwhere size is the total number of ranks.\nPrints the total number of ranks, the sum of all ranks, and the time elapsed for the reduction.\n\"\"\"\n\nimport argparse\nimport time\n\nfrom mpi4py import MPI\n\nparser = argparse.ArgumentParser(description='mpi4py reduction benchmark',\n                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)\nparser.add_argument('--n_warmup', type=int, default=100,\n                    help='Number of warmup iterations')\nparser.add_argument('--n_iter', type=int, default=1000,\n                    help='Number of benchmark iterations')\nargs = parser.parse_args()\n\nn_warmup = args.n_warmup\nn_iter = args.n_iter\n\nsize = MPI.COMM_WORLD.Get_size()\nrank = MPI.COMM_WORLD.Get_rank()\nname = MPI.Get_processor_name()\n\n# Warmup\nt0 = time.time()\nfor i in range(n_warmup):\n    total = MPI.COMM_WORLD.reduce(rank, op=MPI.SUM)\n\n# Actual reduction, multiple iterations for accuracy of timing\nt1 = time.time()\nfor i in range(n_iter):\n    total = MPI.COMM_WORLD.reduce(rank, op=MPI.SUM)\nt2 = time.time()\ntotal_time = (t2 - t1) / n_iter\n\nif rank == 0:\n    print(f\"Total ranks: {size}\")\n    print(f\"Sum of all ranks: {total}\")  # Should be (size * (size-1) / 2)\n    print(f\"Time elapsed: {total_time:.3e}\")\n

Assuming we have mpi4py available, we could run this manually using

$ mpirun -np 4 python3 mpi4py_reduce.py\nTotal ranks: 4\nSum of all ranks: 6\nTime elapsed: 3.609e-06\n

This started 4 processes, with ranks 0, 1, 2, 3, and then summed all the ranks (0+1+2+3=6) on the process with rank 0, which finally printed all this output. The whole reduction operation is performed n_iter times, so that we get a more reproducible timing.
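The test's sanity check relies on this closed-form result; a small standalone snippet (illustration only, not part of the test) makes the relation explicit:

```python
# The sum of the ranks 0, 1, ..., N-1 equals N * (N - 1) / 2; this is what
# the test's sanity function compares against.
def expected_sum_of_ranks(num_tasks):
    return round(num_tasks * ((num_tasks - 1) / 2))

print(expected_sum_of_ranks(4))  # the 4-rank example above: 0+1+2+3 = 6
for n in (1, 2, 128, 192, 3072):
    assert expected_sum_of_ranks(n) == sum(range(n))
```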

"},{"location":"test-suite/writing-portable-tests/#as-portable-reframe-test-legacy","title":"Step 3: implementing as a portable ReFrame test without using EESSI_Mixin","text":"

The approach using inheritance from the EESSI_Mixin class, described above, is strongly preferred and recommended. There might be certain tests that do not fit the standardized approach of EESSI_Mixin, but usually that will be solvable by overwriting hooks set by EESSI_Mixin in the inheriting class. In the rare case that your test is so exotic that even this doesn't provide a sensible solution, you can still invoke the hooks used by EESSI_Mixin manually. Note that this used to be the default way of writing tests for the EESSI test suite.

In step 2, there were several system-specific items in the test. In this section, we will show how we use the EESSI hooks to avoid hard-coding system specific information. We do this by replacing the system-specific parts of the test from Step 2 bit by bit. The full final test can be found under tutorials/mpi4py/mpi4py_portable_legacy.py in the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-test-scales-mandatory","title":"Replacing hard-coded test scales (mandatory)","text":"

We replace the hard-coded

    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n

by

from eessi.testsuite.constants import SCALES\n...\n    # ReFrame will generate a test for each scale\n    scale = parameter(SCALES.keys())\n

The SCALES constant contains a set of default scales at which we run all tests. For our mpi4py example, that is sufficient.

Note

It might be that particular tests do not make sense at certain scales. An example is code that only has multithreading, but no multiprocessing support, and is thus only able to run on a single node. In that case, we filter the set of SCALES down to only those where num_nodes = 1, and parameterize the test across those scales:

from eessi.testsuite.constants import SCALES\ndef get_singlenode_scales():\n    \"\"\"\n    Filtering function for single node tests\n    \"\"\"\n    return [\n        k for (k, v) in SCALES.items()\n        if v['num_nodes'] == 1\n    ]\n   ...\n   scale = parameter(get_singlenode_scales())\n

We also replace

    @run_after('init')\n    def define_task_count(self):\n        self.num_tasks = self.scale\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n

by

from eessi.testsuite import hooks\nfrom eessi.testsuite.constants import SCALES, COMPUTE_UNIT, CPU\n    ...\n    @run_after('init')\n    def run_after_init(self):\n        hooks.set_tag_scale(self)\n\n    @run_after('setup')\n    def set_num_tasks_per_node(self):\n        \"\"\" Setting number of tasks per node and cpus per task in this function. This function sets num_cpus_per_task\n        for 1 node and 2 node options where the request is for full nodes.\"\"\"\n        hooks.assign_tasks_per_compute_unit(self, COMPUTE_UNIT[CPU])\n

The first hook (set_tag_scale) sets a number of custom attributes for the current test, based on the scale (self.num_nodes, self.default_num_cpus_per_node, self.default_num_gpus_per_node, self.node_part). These are not used by ReFrame, but can be used by later hooks from the EESSI test suite. It also sets a ReFrame scale tag for convenience. These scale tags are useful for quick test selection, e.g. by running ReFrame with --tag 1_node one would only run the tests generated for the scale 1_node. Calling this hook is mandatory for all tests, as it ensures standardization of tag names based on the scales.

The second hook, assign_tasks_per_compute_unit, is used to set the task count. This hook sets the self.num_tasks and self.num_tasks_per_node we hardcoded before. In addition, it sets self.num_cpus_per_task. In this case, we call it with the COMPUTE_UNIT[CPU] argument, which means one task will be launched per (physical) CPU core available. Thus, for the 1_node scale, this would run the mpi4py test with 128 tasks on a 128-core node, and with 192 tasks on a 192-core node. Check the code for other valid COMPUTE_UNITs.
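As an illustration of what this amounts to for COMPUTE_UNIT[CPU], here is a simplified standalone sketch (an assumption-laden approximation; the real hook in the EESSI test suite also handles hyperthreading, GPUs and partial-node scales):

```python
# Simplified sketch of task assignment for COMPUTE_UNIT[CPU]: one task per
# physical core, one CPU per task (assumption: this approximates the real hook).
def assign_tasks_cpu(num_nodes, cores_per_node):
    num_tasks_per_node = cores_per_node
    num_tasks = num_nodes * num_tasks_per_node
    num_cpus_per_task = 1
    return num_tasks, num_tasks_per_node, num_cpus_per_task

print(assign_tasks_cpu(1, 128))  # 1_node scale, 128-core node -> (128, 128, 1)
print(assign_tasks_cpu(1, 192))  # 1_node scale, 192-core node -> (192, 192, 1)
```

Note how the 2_nodes scale on 192-core nodes yields 384 tasks, which is exactly why that scale was skipped in the earlier max-task example.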

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-module-names-mandatory","title":"Replacing hard-coded module names (mandatory)","text":"

If we write an mpi4py test, we typically want to run this for all mpi4py modules that are available via our current $MODULEPATH. We do that by replacing

    module_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\n

by using the find_modules utility function:

from eessi.testsuite.utils import find_modules\n...\n    module_name = parameter(find_modules('mpi4py'))\n

We also replace

    @run_after('init')\n    def set_modules(self):\n        self.modules = [self.module_name]\n

by

    @run_after('init')\n    def set_modules(self):\n        hooks.set_modules(self)\n

The set_modules hook assumes that self.module_name has been set, but has the added advantage that a user running the EESSI test suite can overwrite the modules to load from the command line when running ReFrame (see Overriding test parameters).

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-valid_systems-mandatory","title":"Replacing hard-coded valid_systems (mandatory)","text":"

The valid_systems attribute is mandatory in a ReFrame test. However, we can set it to match any system:

valid_systems = ['*']\n

Normally, valid_systems is used as a way of guaranteeing that a system has the necessary properties to run the test. For example, if we know that my_gpu_system has NVIDIA GPUs and I have a test written for NVIDIA GPUs, I would specify valid_systems = ['my_gpu_system'] for that test. This, however, is a surrogate for declaring what my test needs: I'm saying it needs my_gpu_system, while in fact I could make the more general statement 'this test needs NVIDIA GPUs'.

To keep the test system-agnostic we can declare what the test needs by using ReFrame's concept of partition features (a string) and/or extras (a key-value pair); see the ReFrame documentation on valid_systems. For example, a test could declare it needs the gpu feature. Such a test will only be created by ReFrame for partitions that declare (in the ReFrame configuration file) that they have the gpu feature.

Since features and extras are full text fields, we standardize those in the EESSI test suite in the eessi/testsuite/constants.py file. For example, tests that require an NVIDIA GPU could specify

from eessi.testsuite.constants import FEATURES, GPU, GPU_VENDOR, GPU_VENDORS, NVIDIA\n...\nvalid_systems = [f'+{FEATURES[GPU]} %{GPU_VENDOR}={GPU_VENDORS[NVIDIA]}']\n

which makes sure that a test instance is only generated for partitions (as defined in the ReFrame configuration file) that specify that they have the corresponding feature and extras:

from eessi.testsuite.constants import FEATURES, GPU, GPU_VENDOR, GPU_VENDORS, NVIDIA\n...\n'features': [\n     FEATURES[GPU],\n],\n'extras': {\n    GPU_VENDOR: GPU_VENDORS[NVIDIA],\n},\n

In practice, one will rarely hard-code this valid_systems string. Instead, we have a hook filter_valid_systems_by_device_type. It does the above, and a bit more: it also checks if the module that the test is generated for is CUDA-enabled (in case of a test for NVIDIA GPUs), and only then will it generate a GPU-based test. Calling this hook is mandatory for all tests (even if just to declare they need a CPU to run).

Another aspect is that not all ReFrame partitions may be able to run tests of all of the standard SCALES. Each ReFrame partition must add the subset of SCALES it supports to its list of features. A test case can declare it needs a certain scale. For example, a test case using the 16_nodes scale needs a partition with at least 16 nodes. The filter_supported_scales hook then filters out all partitions that do not support running jobs on 16 nodes. Calling this hook is also mandatory for all tests.

Other hooks may also facilitate valid system selection for your tests; please check the code for a full list.

"},{"location":"test-suite/writing-portable-tests/#requesting-sufficient-memory-mandatory","title":"Requesting sufficient memory (mandatory)","text":"

When developing the test, we don't know how much memory the node will have on which it will run. However, we do know how much our application needs.

We can declare this need using the req_memory_per_node hook. This hook is mandatory for all tests. If you are on a system with a scheduler that runs jobs within a cgroup and where you can use mpirun or srun as the parallel launcher command in the ReFrame configuration, getting the memory consumption is easy. You can (temporarily) add the following postrun_cmds to the class body of your test to extract the maximum memory that was used within your cgroup. For cgroups v1, the syntax would be:

   # Temporarily define postrun_cmds to make it easy to find out memory usage\n    postrun_cmds = ['MAX_MEM_IN_BYTES=$(</sys/fs/cgroup/memory/$(</proc/self/cpuset)/../memory.max_usage_in_bytes)', 'echo \"MAX_MEM_IN_MIB=$(($MAX_MEM_IN_BYTES/1048576))\"']\n

For cgroups v2, the syntax would be:

   # Temporarily define postrun_cmds to make it easy to find out memory usage\n   postrun_cmds = ['MAX_MEM_IN_BYTES=$(</sys/fs/cgroup/$(</proc/self/cpuset)/../../../memory.peak)', 'echo \"MAX_MEM_IN_MIB=$(($MAX_MEM_IN_BYTES/1048576))\"']\n

And define an additional performance_function:

    @performance_function('MiB')\n    def max_mem_in_mib(self):\n        return sn.extractsingle(r'^MAX_MEM_IN_MIB=(?P<perf>\\S+)', self.stdout, 'perf', int)\n

This results in the following output on 192-core nodes (we've omitted some output for readability):

[----------] start processing checks\n[       OK ] ( 1/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=16_nodes /38aea144 @snellius:genoa+default\nP: max_mem_in_mib: 22018 MiB (r:0, l:None, u:None)\n[       OK ] ( 2/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=8_nodes /bfc4d3d4 @snellius:genoa+default\nP: max_mem_in_mib: 21845 MiB (r:0, l:None, u:None)\n[       OK ] ( 3/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_nodes /8de369bc @snellius:genoa+default\nP: max_mem_in_mib: 21873 MiB (r:0, l:None, u:None)\n[       OK ] ( 4/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_nodes /364146ba @snellius:genoa+default\nP: max_mem_in_mib: 21800 MiB (r:0, l:None, u:None)\n[       OK ] ( 5/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_node /8225edb3 @snellius:genoa+default\nP: max_mem_in_mib: 21666 MiB (r:0, l:None, u:None)\n[       OK ] ( 6/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_2_node /4acf483a @snellius:genoa+default\nP: max_mem_in_mib: 10768 MiB (r:0, l:None, u:None)\n[       OK ] ( 7/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_4_node /fc3d689b @snellius:genoa+default\nP: max_mem_in_mib: 5363 MiB (r:0, l:None, u:None)\n[       OK ] ( 8/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_8_node /73046a73 @snellius:genoa+default\nP: max_mem_in_mib: 2674 MiB (r:0, l:None, u:None)\n[       OK ] ( 9/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_4nodes /f08712a2 @snellius:genoa+default\nP: max_mem_in_mib: 210 MiB (r:0, l:None, u:None)\n[       OK ] (10/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_2nodes /23cd550b @snellius:genoa+default\nP: max_mem_in_mib: 209 MiB (r:0, l:None, u:None)\n[       OK ] (11/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_cores /bb8e1349 @snellius:genoa+default\nP: max_mem_in_mib: 753 MiB (r:0, l:None, u:None)\n[       OK ] (12/13) EESSI_MPI4PY 
%module_name=mpi4py/3.1.5-gompi-2023b %scale=2_cores /4c0c7c9e @snellius:genoa+default\nP: max_mem_in_mib: 403 MiB (r:0, l:None, u:None)\n[       OK ] (13/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_core /aa83ba9e @snellius:genoa+default\nP: max_mem_in_mib: 195 MiB (r:0, l:None, u:None)\n

If you are not on a system where your scheduler runs jobs in cgroups, you will have to figure out the memory consumption in another way (e.g. by checking memory usage in top while running the test).

We now have a pretty good idea of how the memory per node scales: for our smallest process count (1 core), it's about 200 MiB per process, while for our largest process count (16 nodes, 16*192 processes), it's 22018 MiB per node (or about 115 MiB per process). If we wanted to do really well, we could define a linear function (with offset), fit it through the data, and round up to be on the safe side, i.e. make sure there is enough memory. Then, we could call the hook like this:

@run_after('setup')\ndef request_mem(self):\n    mem_required = self.num_tasks_per_node * mem_slope + mem_intercept\n    hooks.req_memory_per_node(self, app_mem_req=mem_required)\n
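A standalone sketch of how one might obtain mem_slope and mem_intercept from the measurements shown earlier (the selected data points and the simple least-squares fit are illustrative, not part of the test):

```python
# Fit mem_per_node = mem_slope * tasks_per_node + mem_intercept (in MiB)
# through (tasks_per_node, max_mem_in_mib) pairs taken from the output above.
import math

measurements = [(1, 195), (2, 403), (4, 753), (24, 2674),
                (48, 5363), (96, 10768), (192, 21666)]

n = len(measurements)
sx = sum(x for x, _ in measurements)
sy = sum(y for _, y in measurements)
sxx = sum(x * x for x, _ in measurements)
sxy = sum(x * y for x, y in measurements)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Round up so we never underestimate the memory requirement:
mem_slope = math.ceil(slope)
mem_intercept = math.ceil(intercept)
print(mem_slope, mem_intercept)
```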

In this case, however, the memory consumption per process is low enough that we don't have to go through that effort, and we generously request 256 MiB per task that is launched on a node. Thus, we call our hook using:

@run_after('setup')\ndef request_mem(self):\n    mem_required = self.num_tasks_per_node * 256\n    hooks.req_memory_per_node(self, app_mem_req=mem_required)\n
Note that requesting too much memory means the test will be skipped on nodes that cannot meet that requirement (even if they might have been able to run it without actually running out of memory), while requesting too little risks nodes running out of memory while running the test. Many HPC systems have around 1-2 GB of memory per core. It's good to ensure (if you can) that the memory requests for all valid SCALES for your test do not exceed the total amount of memory available on typical nodes.

"},{"location":"test-suite/writing-portable-tests/#requesting-taskprocessthread-binding-recommended","title":"Requesting task/process/thread binding (recommended)","text":"

Binding processes to a set of cores prevents the OS from migrating them to other cores. Especially on multi-socket systems, process migration can hurt performance, in particular when a process is moved to a CPU core on the other socket. Since migration is controlled by the OS, and depends on what other processes are running on the node, it may cause unpredictable performance: in some runs processes might be migrated, while in others they aren't.

Thus, it is typically better for reproducibility to bind processes to their respective set of cores. The set_compact_process_binding hook can do this for you:

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_process_binding(self)\n

For pure MPI codes, it will bind rank 0 to core 0, rank 1 to core 1, etc. For hybrid codes (MPI + OpenMP, or otherwise codes that do both multiprocessing and multithreading at the same time), it will bind each rank to a consecutive set of cores. E.g. if a single process uses 4 cores, it will bind rank 0 to cores 0-3, rank 1 to cores 4-7, etc.
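The resulting core assignment can be illustrated with a small standalone function (an illustration of the binding pattern only, not the hook's actual implementation):

```python
# Compact process binding: rank r with c cores per task is bound to the
# consecutive cores r*c, r*c + 1, ..., r*c + c - 1.
def compact_binding(rank, cores_per_task):
    start = rank * cores_per_task
    return list(range(start, start + cores_per_task))

print(compact_binding(0, 1))  # pure MPI: rank 0 -> [0]
print(compact_binding(1, 1))  # pure MPI: rank 1 -> [1]
print(compact_binding(1, 4))  # hybrid, 4 cores per task: rank 1 -> [4, 5, 6, 7]
```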

To impose this binding, the hook sets environment variables that should be respected by the parallel launcher used to launch your application. Check the code to see which parallel launchers are currently supported. The use of this hook is optional, but generally recommended for all multiprocessing codes.

For multithreading codes, the set_compact_thread_binding hook is an equivalent hook that does thread binding, provided a supported multithreading framework is used (e.g. Intel or GNU OpenMP; see the code for all supported frameworks):

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_thread_binding(self)\n

The use of this hook is optional but recommended in most cases. Note that thread binding can sometimes cause unwanted behaviour: even if e.g. 8 cores are allocated to the process and 8 threads are launched, we have seen codes that bind all those threads to a single core (e.g. core 0) when core binding is enabled. Please verify that enabling core binding does not introduce any unwanted binding behaviour for your code.

"},{"location":"test-suite/writing-portable-tests/#defining-omp_num_threads-recommended","title":"Defining OMP_NUM_THREADS (recommended)","text":"

The set_omp_num_threads hook sets the $OMP_NUM_THREADS environment variable based on the number of cpus_per_task defined in the ReFrame test (which in turn is typically set by the assign_tasks_per_compute_unit hook). For OpenMP codes, it is generally recommended to call this hook, to ensure they launch the correct number of threads.
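In effect, the hook boils down to something like the following (a simplified standalone sketch; the real hook exports the variable via the test's environment so that it ends up in the job script):

```python
# Simplified sketch of set_omp_num_threads: export OMP_NUM_THREADS equal to
# the number of CPUs per task (assumption: approximates the real hook's effect).
def set_omp_num_threads(env_vars, num_cpus_per_task):
    env_vars['OMP_NUM_THREADS'] = str(num_cpus_per_task)
    return env_vars

env = set_omp_num_threads({}, 8)
print(env)  # {'OMP_NUM_THREADS': '8'}
```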

"},{"location":"using_eessi/basic_commands/","title":"Basic commands","text":""},{"location":"using_eessi/basic_commands/#basic-commands-to-access-software-provided-via-eessi","title":"Basic commands to access software provided via EESSI","text":"

EESSI provides software through environment module files and Lmod.

To see which modules (and extensions) are available, run:

module avail\n

Below is a short excerpt of the output produced by module avail, showing 10 modules only.

   PyYAML/5.3-GCCcore-9.3.0\n   Qt5/5.14.1-GCCcore-9.3.0\n   Qt5/5.15.2-GCCcore-10.3.0                               (D)\n   QuantumESPRESSO/6.6-foss-2020a\n   R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n   R/4.0.0-foss-2020a\n   R/4.1.0-foss-2021a                                      (D)\n   re2c/1.3-GCCcore-9.3.0\n   re2c/2.1.1-GCCcore-10.3.0                               (D)\n   RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n

Load modules with module load package/version, e.g., module load R/4.1.0-foss-2021a, and try out the software. See below for a short example session:

[EESSI 2023.06] $ module load R/4.1.0-foss-2021a\n[EESSI 2021.06] $ which R\n/cvmfs/software.eessi.io/versions/2021.12/software/linux/x86_64/intel/skylake_avx512/software/R/4.1.0-foss-2021a/bin/R\n[EESSI 2023.06] $ R --version\nR version 4.1.0 (2021-05-18) -- \"Camp Pontanezen\"\nCopyright (C) 2021 The R Foundation for Statistical Computing\nPlatform: x86_64-pc-linux-gnu (64-bit)\n\nR is free software and comes with ABSOLUTELY NO WARRANTY.\nYou are welcome to redistribute it under the terms of the\nGNU General Public License versions 2 or 3.\nFor more information about these matters see\nhttps://www.gnu.org/licenses/.\n
"},{"location":"using_eessi/building_on_eessi/","title":"Building software on top of EESSI","text":""},{"location":"using_eessi/building_on_eessi/#building-software-on-top-of-eessi-with-easybuild","title":"Building software on top of EESSI with EasyBuild","text":"

Building on top of EESSI with EasyBuild is relatively straightforward. One crucial feature is that EasyBuild supports building against operating system libraries that are not in a standard prefix (such as /usr/lib). This is required when building against EESSI, since all of the software in EESSI is built against the compatibility layer.

"},{"location":"using_eessi/building_on_eessi/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"

Start your environment as described here.

"},{"location":"using_eessi/building_on_eessi/#using-the-eessi-extend-module","title":"Using the EESSI-extend module","text":"

The EESSI-extend module facilitates building on top of EESSI using EasyBuild. It does a few key things:

  1. It configures EasyBuild to match how the rest of the EESSI software is built
  2. It configures EasyBuild to use a certain installation path (e.g. in your homedir), taking into account the hardware architecture you are building on
  3. It adds the relevant subdirectory from your installation path to your MODULEPATH, to make sure your newly installed modules are available
  4. It loads the EasyBuild module

The EESSI-extend module recognizes a few environment variables. To print an up-to-date list, check the module itself:

module help EESSI-extend/2023.06-easybuild\n

The installation prefix is determined by EESSI-extend through the following logic:

  1. If $EESSI_CVMFS_INSTALL is set, software is installed in $EESSI_SOFTWARE_PATH. This variable shouldn't be used by users and would only be used by CVMFS administrators of the EESSI repository.
  2. If $EESSI_SITE_INSTALL is set, the EESSI site installation prefix ($EESSI_SITE_SOFTWARE_PATH) will be used. This is typically where sites hosting a system that has EESSI deployed would install additional software on top of EESSI and make it available to all their users.
  3. If $EESSI_PROJECT_INSTALL is set (and $EESSI_USER_INSTALL is not set), this prefix will be used. You should use this if you want to install additional software on top of EESSI that should also be usable by your project partners on the same system. For example, if you have a project space at /project/my_project that all your project partners can access, you could set export EESSI_PROJECT_INSTALL=/project/my_project/eessi. Make sure that this directory has the SGID permission set (chmod g+s $EESSI_PROJECT_INSTALL). This way, all the additional installations done with EESSI-extend will be put in that prefix, and will get the correct UNIX file permissions so that all your project partners can access it.
  4. If $EESSI_USER_INSTALL is set, this prefix will be used. You should use this if you want to install additional software on top of EESSI just for your own user. For example, you could set export EESSI_USER_INSTALL=$HOME/my/eessi/extend/prefix, and EESSI-extend will install all software in this prefix. Unix file permissions will be set such that these installations will be readable only to the user.

If none of the above apply, the default is a user installation in $HOME/EESSI (i.e. effectively the same as setting EESSI_USER_INSTALL=$HOME/EESSI).
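As an illustration of the project-install scenario, preparing a shared prefix could look like the sketch below. The path here is a temporary directory purely so the commands are self-contained; in practice you would point it at your actual project space (e.g. /project/my_project/eessi):

```shell
# Sketch: prepare a shared installation prefix for EESSI-extend
# (temporary path used as an example; use your real project space instead)
EESSI_PROJECT_INSTALL="${TMPDIR:-/tmp}/my_project/eessi"
mkdir -p "$EESSI_PROJECT_INSTALL"
# SGID bit: files and directories created inside inherit the directory's group,
# so project partners keep access to everything installed here
chmod g+s "$EESSI_PROJECT_INSTALL"
export EESSI_PROJECT_INSTALL
# ...then load EESSI-extend as usual:
# module load EESSI-extend/2023.06-easybuild
```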

Here, we assume you are just an end-user, not having set any of the above environment variables, and loading the EESSI-extend module with the default installation prefix:

module load EESSI-extend/2023.06-easybuild\n

Now, if we check the EasyBuild configuration:

eb --show-config\nallow-loaded-modules (E) = EasyBuild, EESSI-extend\nbuildpath            (E) = /tmp/<user>/easybuild/build\ncontainerpath        (E) = /tmp/<user>/easybuild/containers\ndebug                (E) = True\nexperimental         (E) = True\nfilter-deps          (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars      (E) = LD_LIBRARY_PATH\nhooks                (E) = /cvmfs/software.eessi.io/versions/2023.06/init/easybuild/eb_hooks.py\nignore-osdeps        (E) = True\ninstallpath          (E) = /home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2\nmodule-extensions    (E) = True\npackagepath          (E) = /tmp/<user>/easybuild/packages\nprefix               (E) = /tmp/<user>/easybuild\nread-only-installdir (E) = True\nrepositorypath       (E) = /tmp/<user>/easybuild/ebfiles_repo\nrobot-paths          (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/software/EasyBuild/4.9.4/easybuild/easyconfigs\nrpath                (E) = True\nsourcepath           (E) = /tmp/<user>/easybuild/sources\nsticky-bit           (E) = True\nsysroot              (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64\ntrace                (E) = True\numask                (E) = 077\nzip-logs             (E) = bzip2\n

Apart from the installpath, this is exactly how EasyBuild is configured when software is built for EESSI itself.

Note

Be aware that EESSI-extend will optimize the installation for your current hardware architecture, and the installpath also contains this architecture in its directory structure (just like regular EESSI installations do). This means you should run the installation on the node type on which you also want to use the software. If you want the installation to be present for multiple node types, you can simply run it once on each type of node.

And, if we check our MODULEPATH, we see that the installpath that EasyBuild will use here is prepended:

$ echo $MODULEPATH\n/home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all:...\n

"},{"location":"using_eessi/building_on_eessi/#building","title":"Building","text":"

Now, you are ready to build. For example, suppose you want to install netcdf4-python-1.6.5-foss-2023b.eb (which is not present at the time of writing), you run:

eb netcdf4-python-1.6.5-foss-2023b.eb\n

Note

If this netcdf4-python module is already available by the time you try this, you can force a local rebuild by adding the --rebuild argument in order to experiment with building locally, or pick a different easyconfig to build.

"},{"location":"using_eessi/building_on_eessi/#using-the-newly-built-module","title":"Using the newly built module","text":"

If the installation was done in the site installation path (i.e. EESSI_SITE_INSTALL was set, and things were installed in /cvmfs/software.eessi.io/host_injections/...), the modules are available by default to anyone who has initialized the EESSI software environment.

If the installation through EESSI-extend was done in an EESSI_PROJECT_INSTALL or EESSI_USER_INSTALL location, one has to make sure to load the EESSI-extend module before loading the module of interest, since this adds those prefixes to the MODULEPATH.

If we don't have the EESSI-extend module loaded, it will not find any modules installed in the EESSI_PROJECT_INSTALL or EESSI_USER_INSTALL locations:

$ module unload EESSI-extend\n$ module av netcdf4-python/1.6.5-foss-2023b\nNo module(s) or extension(s) found!\n

But, if we load EESSI-extend first:

$ module load EESSI-extend/2023.06-easybuild\n$ module av netcdf4-python/1.6.5-foss-2023b\n\n---- /home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all ----\n   netcdf4-python/1.6.5-foss-2023b\n

This means you'll always need to load the EESSI-extend module if you want to use these modules (also, and particularly when you want to use them in a job script).
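A minimal job script following this pattern could look like the sketch below (the module names, versions, and script name are placeholders; adjust them to what you actually installed):

```shell
# Write a sketch of a Slurm job script that uses a module installed via EESSI-extend
# (module names/versions below are examples, not guaranteed to exist)
cat > my_job.sh <<'EOF'
#!/bin/bash
#SBATCH --ntasks=1
source /cvmfs/software.eessi.io/versions/2023.06/init/bash
# Load EESSI-extend first, so your own installation prefix is on $MODULEPATH
module load EESSI-extend/2023.06-easybuild
module load netcdf4-python/1.6.5-foss-2023b
python my_analysis.py
EOF
```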

"},{"location":"using_eessi/building_on_eessi/#manually-building-software-op-top-of-eessi-without-easybuild","title":"Manually building software on top of EESSI (without EasyBuild)","text":"

Warning

We are working on a module file that should make building on top of EESSI (without using EasyBuild) more straightforward, particularly when using Autotools or CMake. Right now, it is a little convoluted and requires you to have a decent grasp of:

  * What a runtime dynamic linker (ld-linux*.so) is and does
  * How to influence the behaviour of the runtime linker with LD_LIBRARY_PATH
  * The difference between LIBRARY_PATH and LD_LIBRARY_PATH

As such, this documentation is intended for \"experts\" in the runtime linker and its behaviour, and most cases are untested. Any feedback on this topic is highly appreciated.

Building and running software on top of EESSI without EasyBuild is not straightforward and requires taking some special considerations into account.

It is expected that you will have loaded all of your required dependencies as modules from the EESSI environment. Since EESSI sets LIBRARY_PATH for all of the modules and the GCC compiler is configured to use the compat layer, there should be no additional configuration required to execute a standard build process. On the other hand, EESSI does not set LD_LIBRARY_PATH so, at runtime, the executable will need help finding the libraries that it needs to actually execute. The easiest way to circumvent this requirement is by setting the environment variable LD_RUN_PATH during compile time as well. With LD_RUN_PATH set, the program will be able to tell the dynamic linker to search in those paths when the program is being executed.

EESSI uses a compatibility layer to ensure that it takes as few libraries from the host as possible. The safest way to make sure all libraries will point to the required locations in the compatibility layer (and do not leak in from the host operating system) is starting an EESSI prefix shell before building. To do this:

Note

RPATH should never point to a compatibility layer directory, only to software layer ones, as all resolving is done via the runtime linker (ld-linux*.so) that is shipped with EESSI, which automatically searches these locations.

The biggest downside of this approach is that your executable becomes bound to the architecture you linked your libraries for, i.e., if you add a libhdf5.so compiled for intel_avx512 to your executable's RPATH, you will not be able to run that binary on a machine with a different architecture. If this is an issue for you, you should look into how EESSI itself organises the location of binaries and perhaps leverage the relevant environment variables (e.g., EESSI_SOFTWARE_SUBDIR).

"},{"location":"using_eessi/eessi_demos/","title":"Running EESSI demos","text":"

To really experience how using EESSI can significantly facilitate the work of researchers, we recommend running one or more of the EESSI demos.

First, clone the eessi-demo Git repository, and move into the resulting directory:

git clone https://github.com/EESSI/eessi-demo.git\ncd eessi-demo\n

The contents of the directory should be something like this:

$ ls -l\ntotal 48\ndrwxrwxr-x 2 example users  4096 May 15 13:26 Bioconductor\ndrwxrwxr-x 2 example users  4096 May 15 13:26 ESPResSo\ndrwxrwxr-x 2 example users  4096 May 15 13:26 GROMACS\n-rw-rw-r-- 1 example users 18092 Dec  5  2022 LICENSE\ndrwxrwxr-x 2 example users  4096 May 15 13:26 OpenFOAM\n-rw-rw-r-- 1 example users   543 May 15 13:26 README.md\ndrwxrwxr-x 3 example users  4096 May 15 13:26 scripts\ndrwxrwxr-x 2 example users  4096 May 15 13:26 TensorFlow\n

The directories we care about are those that correspond to particular scientific software, like Bioconductor, GROMACS, OpenFOAM, TensorFlow, ...

Each of these contains a run.sh script that can be used to start a small example run with that software. Every example takes a couple of minutes to run, even with limited resources.

"},{"location":"using_eessi/eessi_demos/#example-running-tensorflow","title":"Example: running TensorFlow","text":"

Let's try running the TensorFlow example.

First, we need to make sure that our environment is set up to use EESSI:

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n

Change to the TensorFlow subdirectory of the eessi-demo Git repository, and execute the run.sh script:

[EESSI 2023.06] $ cd TensorFlow\n[EESSI 2023.06] $ ./run.sh\n

Shortly after starting the script you should see output as shown below, which indicates that TensorFlow has started running:

Epoch 1/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.2983 - accuracy: 0.9140\nEpoch 2/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.1444 - accuracy: 0.9563\nEpoch 3/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.1078 - accuracy: 0.9670\nEpoch 4/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.0890 - accuracy: 0.9717\nEpoch 5/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.0732 - accuracy: 0.9772\n313/313 - 0s - loss: 0.0679 - accuracy: 0.9790 - 391ms/epoch - 1ms/step\n\nreal   1m24.645s\nuser   0m16.467s\nsys    0m0.910s\n
"},{"location":"using_eessi/eessi_in_ci/","title":"Leveraging EESSI for Continuous Integration","text":"

EESSI is already available as both a GitHub Action and a GitLab CI/CD component, which means you can easily integrate it if you use continuous integration within those ecosystems.

Note

Both of these EESSI CI tools support the use of direnv to allow you to store your desired environment within a .envrc file within your repository. See the documentation of the individual tools for detailed usage.
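For instance, a .envrc could look something like the fragment below. The module names are purely illustrative and may not exist in EESSI; consult the documentation of the individual CI tools for the exact conventions they expect:

```shell
# Example .envrc contents (illustrative; see the EESSI CI tools' docs)
module load GROMACS/2024.1-foss-2023b
module load CMake/3.27.6-GCCcore-13.2.0
export OMP_NUM_THREADS=2
```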

"},{"location":"using_eessi/eessi_in_ci/#the-eessi-github-action","title":"The EESSI GitHub Action","text":"

The EESSI GitHub Action can be found on the GitHub Marketplace, at https://github.com/marketplace/actions/eessi. Below is a minimal example of how to leverage the action, for detailed usage please refer to the official action documentation.

name: Minimal usage\non: [push, pull_request]\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: eessi/github-action-eessi@v3\n    - name: Test EESSI\n      run: |\n        module avail\n      shell: bash\n
"},{"location":"using_eessi/eessi_in_ci/#the-eessi-gitlab-cicd-component","title":"The EESSI GitLab CI/CD component","text":"

The EESSI GitLab CI/CD component can be found in the GitLab CI/CD Catalog, at https://gitlab.com/explore/catalog/eessi/gitlab-eessi. Below is a minimal example of how to leverage the component, for detailed usage please refer to the official component documentation.

include:\n  - component: $CI_SERVER_FQDN/eessi/gitlab-eessi/eessi@1.0.5\n\nbuild:\n  stage: build\n  script:\n    - module spider GROMACS\n
"},{"location":"using_eessi/setting_up_environment/","title":"Setting up your environment","text":"

In Unix-like systems, environment variables are used to configure the environment in which applications and scripts run. To set up EESSI, you need to configure a specific set of environment variables so that your operating system is aware that EESSI exists and is to be used. We have prepared a few automated approaches that do this for you: you can either load an EESSI environment module or source an initialisation script for bash.

With any of the approaches below, the first time you use them they may seem to take a while as any necessary data is downloaded in the background from a Stratum 1 server (which is part of the CernVM-FS infrastructure used to distribute files for EESSI).

"},{"location":"using_eessi/setting_up_environment/#loading-an-eessi-environment-module","title":"Loading an EESSI environment module","text":"

There are a few different scenarios where you may want to set up the EESSI environment by loading an EESSI environment module. The simplest scenario is one where you do not already have an environment module tool on your system; in this case, we configure the Lmod module tool shipped with EESSI and automatically load the EESSI environment module:

source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n
This command configures Lmod for your system and automatically loads the EESSI module so that EESSI is immediately available to use. If you would like to see what environment variables the module sets, you can use module show EESSI.

Your environment is now set up, you are ready to start running software provided by EESSI!

What if I don't use a bash shell?

The example above is shown for a bash shell but the environment module approach supports all the shells that Lmod itself supports (bash, csh, fish, ksh, zsh):

source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/csh\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/fish\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/ksh\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/zsh\n

What if I already have Lmod installed or another module tool is available on the system?

You can check if the module command is already defined for your system and what version it has with

command -v module && module --version\n

  1. If you are already using Lmod (modules based on Lua) with version >= 8.6:

    In this case, we recommend resetting $MODULEPATH, because EESSI is not designed to mix modules coming from EESSI and from your system.

    module unuse $MODULEPATH\nmodule use /cvmfs/software.eessi.io/init/modules\nmodule load EESSI/2023.06\n

    Your environment is now set up, you are ready to start running software provided by EESSI!

  2. If you are using an Lmod with a version older than 8.6 or any other module tool utilizing MODULEPATH (e.g., Tcl-based Environment Modules):

It is recommended to unset $MODULEPATH to prevent Lmod from attempting to build a cache for your module tree (as this can be very slow if you have a lot of modules). Again, unsetting $MODULEPATH is a good idea in general, so you do not mix local and EESSI modules. You will then need to initialise a compatible version of Lmod, for example the one shipped with EESSI:

    unset MODULEPATH\nsource /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n

    Your environment is now set up, you are ready to start running software provided by EESSI!

Why do we recommend unsetting MODULEPATH?

Unsetting the $MODULEPATH environment variable, which tells Lmod in which directories environment module files are available, may be necessary. The underlying reason to suggest this is that EESSI and your system are most likely based on two different operating system distributions - EESSI uses its compatibility layer, while your system almost certainly uses some other Linux distribution. If you can find a way to ensure that the software stacks from your site and EESSI do not mix (in particular when someone is building new software!), then this should be good enough.

"},{"location":"using_eessi/setting_up_environment/#sourcing-the-eessi-bash-initialisation-script","title":"Sourcing the EESSI bash initialisation script","text":"

This is supported exclusively for bash shell users. If you're using a different shell, please use the alternative approach.

You can see what your current shell is with the command echo $SHELL.

You can initialise EESSI (in a non-reversible way) by running the command:

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n

You should see the following output:

Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\narchdetect says x86_64/amd/zen2  # (1)\narchdetect could not detect any accelerators\nUsing x86_64/amd/zen2 as software subdirectory.\nFound Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/SitePackage.lua\nUsing /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2 as the site extension directory for installations.\nUsing /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all as the site extension directory to be added to MODULEPATH.\nFound libcurl CAs file at RHEL location, setting CURL_CA_BUNDLE\nInitializing Lmod...\nPrepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} [user@system ~]$  # (2)!\n

What is reported at (1) depends on the CPU architecture of the machine on which you are running the source command.

At (2), the prompt indicates that you have access to the EESSI software stack.

Your environment is now set up, you are ready to start running software provided by EESSI!

"},{"location":"blog/archive/2024/","title":"2024","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to the EESSI project documentation!","text":"

Quote

What if there was a way to avoid having to install a broad range of scientific software from scratch on every HPC cluster or cloud instance you use or maintain, without compromising on performance?

The European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is a collaboration between different European partners in HPC community. The goal of this project is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure.

"},{"location":"#quick-links","title":"Quick links","text":"

For users:

For system administrators:

For contributors:

The EESSI project was covered during a quick AWS HPC Tech Short video (15 June 2023):

"},{"location":"bot/","title":"Build-test-deploy bot","text":"

Building, testing, and deploying software is done by one or more bot instances.

The EESSI build-test-deploy bot is implemented as a GitHub App in the eessi-bot-software-layer repository.

It operates in the context of pull requests to the compatibility-layer repository or the software-layer repository, and follows the instructions supplied by humans, so the procedure of adding software to EESSI is semi-automatic.

It leverages the scripts provided in the bot/ subdirectory of the target repository (see for example here), like bot/build.sh to build software, and bot/check-result.sh to check whether the software was built correctly.

"},{"location":"bot/#high-level-design","title":"High-level design","text":"

The bot consists of two components: the event handler, and the job manager.

"},{"location":"bot/#event-handler","title":"Event handler","text":"

The bot event handler is responsible for handling GitHub events for the GitHub repositories it is registered to.

It is triggered for every event that it receives from GitHub. Most events are ignored, but specific events trigger the bot to take action.

Examples of actionable events are the submission of a comment that starts with bot: (which may specify an instruction for the bot, like building software), or the addition of a bot:deploy label (see deploying).

"},{"location":"bot/#job-manager","title":"Job manager","text":"

The bot job manager is responsible for monitoring the queued and running jobs, and for reporting back when jobs have completed.

It runs every couple of minutes as a cron job.

"},{"location":"bot/#basics","title":"Basics","text":"

Instructions for the bot should always start with bot:.

To get help from the bot, post a comment with bot: help.

To make the bot report how it is configured, post a comment with bot: show_config.

"},{"location":"bot/#permissions","title":"Permissions","text":"

The bot is configured to only act on instructions issued by specific GitHub accounts.

There are separate configuration options that control who is allowed to send instructions to the bot, to trigger the building of software, and to deploy software installations into the EESSI repository.

Note

Ask for help in the #software-layer-bot channel of the EESSI Slack if needed!

"},{"location":"bot/#building","title":"Building","text":"

To instruct the bot to build software, one or more build instructions should be issued by posting a comment in the pull request (see also here).

The most basic build instruction that can be sent to the bot is:

bot: build\n

Warning

Only use bot: build if you are confident that it is OK to do so.

Most likely, you want to supply one or more filters to avoid that the bot builds for all its configurations.

"},{"location":"bot/#filters","title":"Filters","text":"

Build instructions can include filters that are applied by each bot instance to determine which builds should be executed, based on:

Note

Use : as separator to specify a value for a particular filter, do not add spaces after the :.

The bot recognizes shorthands for the supported filters, so you can use inst:... instead of instance:..., repo:... instead of repository:..., and arch:... instead of architecture:....

"},{"location":"bot/#combining-filters","title":"Combining filters","text":"

You can combine multiple filters in a single build instruction. Separate filters with a space, order of filters does not matter.

For example:

bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen2\n
"},{"location":"bot/#multiple-build-instructions","title":"Multiple build instructions","text":"

You can issue multiple build instructions in a single comment, even across multiple bot instances, repositories, and CPU targets. Specify one build instruction per line.

For example:

bot: build repo:eessi-hpc.org-2023.06-software arch:x86_64/amd/zen3 inst:aws\nbot: build repo:eessi-hpc.org-2023.06-software arch:aarch64/generic inst:azure\n

Note

The bot applies the filters with partial matching, which you can use to combine multiple build instructions into a single one.

For example, if you only want to build for all aarch64 CPU targets, you can use arch:aarch64 as filter.

The same applies to the instance and repository filters.
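The partial-matching behaviour can be illustrated with a small sketch (this is not the bot's actual code, just the idea: a filter matches when its value occurs as a substring of the corresponding property of a build target):

```python
def matches(filters, target):
    """Return True if every filter value is a partial (substring) match
    against the corresponding property of the build target."""
    return all(value in target.get(name, "") for name, value in filters.items())

# Hypothetical build targets a bot instance might support
targets = [
    {"arch": "aarch64/generic", "inst": "aws"},
    {"arch": "aarch64/neoverse_v1", "inst": "aws"},
    {"arch": "x86_64/amd/zen2", "inst": "azure"},
]

# arch:aarch64 selects every aarch64 CPU target in one instruction
selected = [t for t in targets if matches({"arch": "aarch64"}, t)]
print([t["arch"] for t in selected])  # → ['aarch64/generic', 'aarch64/neoverse_v1']
```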

"},{"location":"bot/#behind-the-scenes","title":"Behind-the-scenes","text":""},{"location":"bot/#processing-build-instructions","title":"Processing build instructions","text":"

When the bot receives build instructions through a comment in a pull request, they are processed by the event handler component. It will:

1) Combine its active configuration (instance name, repositories, supported CPU targets) and the build instructions to prepare a list of jobs to submit;

2) Create a working directory for each job, including a Slurm job script that runs the bot/build.sh script in the context of the changes proposed in the pull request to build the software, and runs the bot/check-result.sh script at the end to check whether the build was successful;

3) Submit each prepared job to a worker node that can build for the specified CPU target, and put a hold on it.

"},{"location":"bot/#managing-build-jobs","title":"Managing build jobs","text":"

During the next iteration of the job manager, the submitted jobs are released and queued for execution.

The job manager also monitors the running jobs at regular intervals, and reports back in the pull request when a job has completed. It also reports the result (SUCCESS or FAILURE), based on the result of the bot/check-result.sh script.

"},{"location":"bot/#artefacts","title":"Artefacts","text":"

If all goes well, each job should produce a tarball as an artefact, which contains the software installations and the corresponding environment module files.

The message reported by the job manager provides an overview of the contents of the artefact, which was created by the bot/check-result.sh script.

"},{"location":"bot/#testing","title":"Testing","text":"

Warning

The test phase is not implemented yet in the bot.

We intend to use the EESSI test suite in different OS configurations to verify that the software that was built works as expected.

"},{"location":"bot/#deploying","title":"Deploying","text":"

To deploy the artefacts that were obtained in the build phase, you should add the bot: deploy label to the pull request.

This will trigger the event handler to upload the artefacts for ingestion into the EESSI repository.

"},{"location":"bot/#behind-the-scenes_1","title":"Behind-the-scenes","text":"

The current setup for the software-layer repository is as follows:

"},{"location":"compatibility_layer/","title":"Compatibility layer","text":"

The middle layer of the EESSI project is the compatibility layer, which ensures that our scientific software stack is compatible with different client operating systems (different Linux distributions, macOS and even Windows via WSL).

For this we rely on Gentoo Prefix, by installing a limited set of Gentoo Linux packages in a non-standard location (a \"prefix\"), using Gentoo's package manager Portage.

The compatibility layer is maintained via our https://github.com/EESSI/compatibility-layer GitHub repository.

"},{"location":"contact/","title":"Contact info","text":"

For more information:

"},{"location":"filesystem_layer/","title":"Filesystem layer","text":""},{"location":"filesystem_layer/#cernvm-file-system-cernvm-fs","title":"CernVM File System (CernVM-FS)","text":"

The bottom layer of the EESSI project is the filesystem layer, which is responsible for distributing the software stack.

For this we rely on CernVM-FS (or CVMFS for short), a network file system used to distribute the software to the clients in a fast, reliable and scalable way.

CVMFS was created over 10 years ago specifically for the purpose of globally distributing a large software stack. For the experiments at the Large Hadron Collider, it hosts several hundred million files and directories that are distributed to on the order of a hundred thousand client computers.

The hierarchical structure with multiple caching layers (Stratum-0, Stratum-1's located at partner sites and local caching proxies) ensures good performance with limited resources. Redundancy is provided by using multiple Stratum-1's at various sites. Since CVMFS is based on the HTTP protocol, the ubiquitous Squid caching proxy can be leveraged to reduce server loads and improve performance at large installations (such as HPC clusters). Clients can easily mount the file system (read-only) via a FUSE (Filesystem in Userspace) module.
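As a sketch, a minimal client-side configuration in /etc/cvmfs/default.local could look like the fragment below (the values are examples, not a recommendation; see the EESSI and CernVM-FS documentation for what fits your site):

```shell
# /etc/cvmfs/default.local (example values)
CVMFS_CLIENT_PROFILE=single   # standalone client: talk to public Stratum 1s directly
CVMFS_QUOTA_LIMIT=10000       # local cache size limit, in MB
```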

For a (basic) introduction to CernVM-FS, see this presentation.

Detailed information about how we configure CVMFS is available at https://github.com/EESSI/filesystem-layer.

"},{"location":"filesystem_layer/#eessi-infrastructure","title":"EESSI infrastructure","text":"

For both the pilot and production repositories, EESSI hosts a CernVM-FS Stratum 0 and a number of public Stratum 1 servers. Client systems using EESSI by default connect against the public EESSI CernVM-FS Stratum 1 servers. The status of the infrastructure for the pilot repository is displayed at http://status.eessi-infra.org, while for the production repository it is displayed at https://status.eessi.io.

"},{"location":"governance/","title":"EESSI Governance","text":"

EESSI recognises that formal governance is essential given the ambitions of the project, not just for EESSI itself but also to those who would adopt EESSI and/or fund its development.

EESSI is, therefore, in the process of adopting a formal governance model. To facilitate this process, it has created an Interim Steering Committee whose role is to progress this adoption while also providing direction to the project.

"},{"location":"governance/#members-of-the-interim-steering-committee","title":"Members of the Interim Steering Committee","text":"

The members of the Interim Steering Committee are listed below. Each member of the Interim Steering Committee also nominates an alternate, in case they are not able to attend a meeting of the committee.

"},{"location":"meetings/","title":"Meetings","text":""},{"location":"meetings/#monthly-meetings-online","title":"Monthly meetings (online)","text":"

Online EESSI update meeting, every 1st Thursday of the month at 14:00 CE(S)T.

More info can be found on the EESSI wiki.

"},{"location":"meetings/#physical-meetings","title":"Physical meetings","text":""},{"location":"meetings/#physical-meetings-archive","title":"Physical meetings (archive)","text":""},{"location":"meetings/#2020","title":"2020","text":""},{"location":"meetings/#2019","title":"2019","text":""},{"location":"overview/","title":"Overview of the EESSI project","text":""},{"location":"overview/#scope-goals","title":"Scope & Goals","text":"

Through the EESSI project, we want to set up a shared stack of scientific software installations, and by doing so avoid a lot of duplicate work across HPC sites.

For end users, we want to provide a uniform user experience with respect to available scientific software, regardless of which system they use.

Our software stack should work on laptops, personal workstations, HPC clusters and in the cloud, which means we will need to support different CPUs, networks, GPUs, and so on. We hope to make this work for any Linux distribution and maybe even macOS and Windows via WSL, and a wide variety of CPU architectures (Intel, AMD, ARM, POWER, RISC-V).

Of course we want to focus on the performance of the software, but also on automating the workflow for maintaining the software stack, thoroughly testing the installations, and collaborating efficiently.

"},{"location":"overview/#inspiration","title":"Inspiration","text":"

The EESSI concept is heavily inspired by the Compute Canada software stack, a shared software stack used on all 5 major national systems in Canada and several smaller ones.

The design of the Compute Canada software stack is discussed in detail in the PEARC'19 paper \"Providing a Unified Software Environment for Canada\u2019s National Advanced Computing Centers\".

It has also been presented at the 5th EasyBuild User Meeting (slides, recorded talk), and is well documented.

"},{"location":"overview/#layered-structure","title":"Layered structure","text":"

The EESSI project consists of 3 layers.

The bottom layer is the filesystem layer, which is responsible for distributing the software stack across clients.

The middle layer is a compatibility layer, which ensures that the software stack is compatible with multiple different client operating systems.

The top layer is the software layer, which contains the actual scientific software applications and their dependencies.

The host OS still provides a couple of things, like drivers for network and GPU, support for shared filesystems like GPFS and Lustre, a resource manager like Slurm, and so on.

"},{"location":"overview/#opportunities","title":"Opportunities","text":"

We hope to collaborate with interested parties across the HPC community, including HPC centres, vendors, consultancy companies and scientific software developers.

Through our software stack, HPC users can seamlessly hop between sites, since the same software is available everywhere.

We can leverage each other's work with respect to providing tested and properly optimized scientific software installations more efficiently, and provide a platform for easy benchmarking of new systems.

By working together with the developers of scientific software we can provide vetted installations for the broad HPC community.

"},{"location":"overview/#challenges","title":"Challenges","text":"

There are many challenges in an ambitious project like this, including (but probably not limited to):

"},{"location":"overview/#current-status","title":"Current status","text":"

(June 2020)

We are actively working on the EESSI repository, and are organizing monthly meetings to discuss progress and next steps forward.

Keep an eye on our GitHub repositories at https://github.com/EESSI and our Twitter feed.

"},{"location":"partners/","title":"Project partners","text":""},{"location":"partners/#delft-university-of-technology-the-netherlands","title":"Delft University of Technology (The Netherlands)","text":""},{"location":"partners/#dell-technologies-europe","title":"Dell Technologies (Europe)","text":""},{"location":"partners/#eindhoven-university-of-technology","title":"Eindhoven University of Technology","text":""},{"location":"partners/#ghent-university-belgium","title":"Ghent University (Belgium)","text":""},{"location":"partners/#hpcnow-spain","title":"HPCNow! (Spain)","text":""},{"location":"partners/#julich-supercomputing-centre-germany","title":"J\u00fclich Supercomputing Centre (Germany)","text":""},{"location":"partners/#university-of-cambridge-united-kingdom","title":"University of Cambridge (United Kingdom)","text":""},{"location":"partners/#university-of-groningen-the-netherlands","title":"University of Groningen (The Netherlands)","text":""},{"location":"partners/#university-of-twente-the-netherlands","title":"University of Twente (The Netherlands)","text":""},{"location":"partners/#university-of-oslo-norway","title":"University of Oslo (Norway)","text":""},{"location":"partners/#university-of-bergen-norway","title":"University of Bergen (Norway)","text":""},{"location":"partners/#vrije-universiteit-amsterdam-the-netherlands","title":"Vrije Universiteit Amsterdam (The Netherlands)","text":""},{"location":"partners/#surf-the-netherlands","title":"SURF (The Netherlands)","text":""},{"location":"software_layer/","title":"Software layer","text":"

The top layer of the EESSI project is the software layer, which provides the actual scientific software installations.

To install the software we include in our stack, we use EasyBuild, a framework for installing scientific software on HPC systems. These installations are optimized for a particular system architecture (specific CPU and GPU generation).

To access these software installations, we provide environment module files and use Lmod, a modern environment modules tool that has been widely adopted in the HPC community in recent years.

We leverage the archspec Python library to automatically select the best suited part of the software stack for a particular host, based on its system architecture.
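As a rough illustration of the kind of selection archspec enables, the idea can be sketched as walking from the host's microarchitecture up through its ancestors until a supported CPU target is found. This is a hypothetical, simplified sketch with a hand-written ancestor table, not EESSI's actual initialisation code (which uses archspec's real microarchitecture hierarchy):

```python
# Hypothetical sketch of best-target selection; the real logic relies on
# the archspec library and its full CPU microarchitecture family tree.

# Simplified ancestor chains (most specific first), loosely modelled on
# archspec's hierarchy. These entries are illustrative assumptions.
ANCESTORS = {
    "zen3": ["zen3", "zen2", "x86_64"],
    "zen2": ["zen2", "x86_64"],
    "haswell": ["haswell", "x86_64"],
}

def best_target(host_arch, supported_targets):
    """Return the most specific supported target for the host CPU,
    falling back to more generic ancestors when needed."""
    for arch in ANCESTORS[host_arch]:
        if arch in supported_targets:
            return arch
    raise ValueError(f"no compatible target for {host_arch}")

supported = {"x86_64", "zen2", "haswell"}
print(best_target("zen3", supported))     # falls back to zen2
print(best_target("haswell", supported))  # exact match: haswell
```

The fallback behaviour is the key point: a host whose exact microarchitecture is not available still gets the closest compatible (if less optimized) part of the stack.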

The software layer is maintained through our https://github.com/EESSI/software-layer GitHub repository.

"},{"location":"software_testing/","title":"Software testing","text":"

This page has been replaced with the test-suite page, update your bookmarks!

"},{"location":"support/","title":"Getting support for EESSI","text":"

Thanks to the MultiXscale EuroHPC project we are able to provide support to the users of EESSI.

The EESSI support portal is hosted in GitLab: https://gitlab.com/eessi/support.

"},{"location":"support/#open-issue","title":"How to report a problem or ask a question","text":"

We recommend using a GitLab account if you want to get help from the EESSI support team.

If you have a GitLab account you can submit your problems or questions on EESSI via the issue tracker of the EESSI support portal at https://gitlab.com/eessi/support/-/issues. Please use one of the provided templates (report a problem, software request, question, ...) when creating an issue.

You can also contact us via our e-mail address support (@) eessi.io, which will automatically create a (private) issue in the EESSI support portal. When you send us an email, please provide us with as much information as possible on your question or problem. You can find an overview of the information that we would like to receive in the README of the EESSI support portal.

"},{"location":"support/#level-of-support","title":"Level of Support","text":"

We provide support for EESSI according to a \"reasonable effort\" standard. That means we will put in a reasonable effort to help you, but we may not have the time to explore every potential cause, and it may not lead to a (quick) solution. You can compare this to the level of support you typically get from other active open source projects.

Note that the more complete your reported issue is (e.g. description of the error, what you ran, the software environment in which you ran it, a minimal reproducer, etc.), the better the chance that we can help you with \"reasonable effort\".

"},{"location":"support/#what-do-we-provide-support-for","title":"What do we provide support for","text":""},{"location":"support/#accessing-and-using-the-eessi-software-stack","title":"Accessing and using the EESSI software stack","text":"

If you have trouble connecting to the software stack, such as trouble related to installing or configuring CernVM-FS to access the EESSI filesystem layer, or running the software installations included in the EESSI compatibility layer or software layer, please contact us.

Note that we can only help with problems related to the software installations (getting the software to run, to perform as expected, etc.). We do not provide support for using specific features of the provided software, nor can we fix (known or unknown) bugs in the software included in EESSI. We can only help with diagnosing and fixing problems that are caused by how the software was built and installed in EESSI.

"},{"location":"support/#software-requests","title":"Software requests","text":"

We are open to software requests for software that is not included in EESSI yet.

The quickest way to add additional software to EESSI is by contributing it yourself as a community contribution, please see the documentation on adding software.

Alternatively, you can send in a request to our support team. Please try to provide as much information on the software as possible: preferably use the issue template (which requires you to log in to GitLab), or make sure to cover the items listed here.

Be aware that we can only provide software that has an appropriate open source license.

"},{"location":"support/#eessi-test-suite","title":"EESSI test suite","text":"

If you are using the EESSI test suite, you can get help via the EESSI support portal.

"},{"location":"support/#build-and-deploy-bot","title":"Build-and-deploy bot","text":"

If you are using the EESSI build-and-deploy bot, you can get help via the EESSI support portal.

"},{"location":"support/#what-do-we-not-provide-support-for","title":"What do we not provide support for","text":"

Do not contact the EESSI support team to get help with using software that is included in EESSI, unless you think the problems you are seeing are related to how the software was built and installed.

Please consult the documentation of the software you are using, or contact the developers of the software directly, if you have questions regarding using the software, or if you think you have found a bug.

Funded by the European Union. This work has received funding from the European High Performance Computing Joint Undertaking (JU) and countries participating in the project under grant agreement No 101093169.

"},{"location":"systems/","title":"Systems on which EESSI is available natively","text":"

This page lists the HPC systems (that we know of) on which EESSI is available system-wide.

On these systems, you should be able to initialise your session environment for using EESSI as documented here, and you can try running our demos.

Please report additional systems on which EESSI is available

If you know of one or more systems on which EESSI is available system-wide that are not listed here yet, please let us know by contacting the EESSI support team, so we can update this page (or open a pull request).

What if EESSI is not available system-wide yet?

If EESSI is not available yet on the HPC system(s) that you use, contact the corresponding support team and submit a request to make it available.

You can point them to our documentation:

If they have any questions, please suggest to contact the EESSI support team.

In the meantime, you can try using one of the alternative ways of accessing EESSI, like using a container.

"},{"location":"systems/#eurohpc-ju-systems","title":"EuroHPC JU systems","text":"

EESSI is available on several of the EuroHPC JU supercomputers.

"},{"location":"systems/#karolina-czech-republic","title":"Karolina (Czech Republic)","text":"

Karolina is the EuroHPC JU supercomputer hosted by IT4Innovations.

"},{"location":"systems/#vega-slovenia","title":"Vega (Slovenia)","text":"

Vega is the EuroHPC JU supercomputer hosted by the Institute for Information Science (IZUM).

"},{"location":"systems/#other-european-systems","title":"Other European systems","text":""},{"location":"systems/#belgium","title":"Belgium","text":""},{"location":"systems/#ghent-university","title":"Ghent University","text":""},{"location":"systems/#vrije-universiteit-brussel","title":"Vrije Universiteit Brussel","text":""},{"location":"systems/#germany","title":"Germany","text":""},{"location":"systems/#embl-heidelberg","title":"EMBL Heidelberg","text":""},{"location":"systems/#university-of-stuttgart","title":"University of Stuttgart","text":""},{"location":"systems/#greece","title":"Greece","text":""},{"location":"systems/#aristotle-university-of-thessaloniki","title":"Aristotle University of Thessaloniki","text":""},{"location":"systems/#netherlands","title":"Netherlands","text":""},{"location":"systems/#surf","title":"SURF","text":""},{"location":"systems/#university-of-groningen","title":"University of Groningen","text":""},{"location":"systems/#norway","title":"Norway","text":""},{"location":"systems/#sigma2-as-norwegian-research-infrastructure-services","title":"Sigma2 AS / Norwegian Research Infrastructure Services","text":""},{"location":"talks/","title":"Talks related to EESSI","text":""},{"location":"talks/#2023","title":"2023","text":""},{"location":"adding_software/adding_development_software/","title":"Adding software to dev.eessi.io","text":"

dev.eessi.io is still in active development and focused on MultiXscale

The dev.eessi.io repository and its functionality are still in their early stages. The repository itself and the build and deploy procedure for it are functional, but may change often for the time being.

Our focus is currently on including and supporting developers and applications in the MultiXscale CoE.

"},{"location":"adding_software/adding_development_software/#what-is-deveessiio","title":"What is dev.eessi.io?","text":"

dev.eessi.io is the development repository of EESSI.

"},{"location":"adding_software/adding_development_software/#adding-software","title":"Adding software","text":"

Using dev.eessi.io is similar to using EESSI's production repository software.eessi.io. Software builds are triggered by a bot listening to pull requests in GitHub repositories. These builds require custom easyconfig and easystack files, which should be in specific directories.

To see this in practice, refer to the dev.eessi.io-example GitHub repository. In this GitHub repository you will find templates for some software installations with the appropriate directory structure, that is:

dev.eessi.io-example\n\u251c\u2500\u2500 easyconfigs\n\u2514\u2500\u2500 easystacks\n
"},{"location":"adding_software/adding_development_software/#quick-steps-to-build-for-deveessiio","title":"Quick steps to build for dev.eessi.io","text":""},{"location":"adding_software/adding_development_software/#installation-details","title":"Installation details","text":""},{"location":"adding_software/adding_development_software/#easyconfig-files-and-software-commit","title":"easyconfig files and --software-commit","text":"

The approach to build and install software is similar to that of software.eessi.io. It requires one or more easyconfig files. Easyconfig files used for building for dev.eessi.io do not need to be a part of an EasyBuild release, unlike builds for software.eessi.io. In this case, the development easyconfigs can be located under easyconfigs/ in the dev.eessi.io repository being used.

To allow for development builds, we leverage the --software-commit functionality (requires EasyBuild v4.9.3 or higher). This lets us build a given application from a specific commit in a repository. This can also be done from a fork, by changing the github_account field in the easyconfig file. We've created a template for ESPResSo based on the standard easyconfig of the most recent version. The relevant fields are:

easyblock = 'CMakeMake'\n\nname = 'ESPResSo'\nversion = 'devel'\nversionsuffix = '-%(software_commit)s'\n\nhomepage = 'https://espressomd.org/wordpress'\ndescription = \"\"\"A software package for performing and analyzing scientific Molecular Dynamics simulations.\"\"\"\n\ngithub_account = 'espressomd'\nsource_urls = ['https://github.com/%(github_account)s/%(name)s/archive/']\n\nsources = ['%(software_commit)s.tar.gz']\n

--software-commit disables --robot

Using --software-commit disables the use of --robot, so make sure that you explicitly include new dependencies that might need to be installed. Otherwise, the easyconfig files won't be found.

You can also make additional changes to the easyconfig file, for example, if the new functionality requires new build or runtime dependencies, patches, configuration options, etc. It's a good idea to try installing from a specific commit locally first, to at least see if everything is parsed correctly and confirm that the right sources are being downloaded.

While the process to build for dev.eessi.io is similar to the one for the production repository, there are a few additional details to keep in mind.

"},{"location":"adding_software/adding_development_software/#software-version","title":"Software version","text":"

Installations in the EESSI production repository refer to specific versions of applications. However, development builds can't follow the same approach, as they are most often not pegged to a release. Because of this, it is possible to use a descriptive \"version\" label as the version parameter in the easyconfig file for a given (set of) installations.

Note that some applications are built with custom easyblocks, which may use the version parameter to determine how the installation is meant to work (for example, recent versions may need to copy files to a new directory). Make sure that you account for this, otherwise you may install the software differently than intended. If you encounter issues, you can open an issue in our support portal.
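To illustrate why a descriptive label can trip up an easyblock, consider a hypothetical sketch of version-dependent branching (the function name and threshold are invented for illustration; real easyblocks use EasyBuild's own version-comparison helpers):

```python
# Hypothetical sketch: custom easyblocks may branch on the version,
# so a label like 'devel' is not a numeric version they can compare.
def needs_new_layout(version):
    """Mimic an easyblock that changes behaviour for versions >= 4.0."""
    try:
        major = int(version.split(".")[0])
    except ValueError:
        # A descriptive label like 'devel' cannot be parsed as a
        # version, so the easyblock cannot decide which branch to take.
        return None
    return major >= 4

print(needs_new_layout("4.2.2"))  # True
print(needs_new_layout("devel"))  # None
```

In a real easyblock the fallback behaviour may be less graceful than returning None, which is exactly why a non-standard version label deserves extra care.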

"},{"location":"adding_software/adding_development_software/#installing-dependencies","title":"Installing dependencies","text":"

Installations in dev.eessi.io are done on top of software.eessi.io. That means that if your development build depends on some application that is already installed in software.eessi.io, then that installation will simply be used. However, if you need to add a new dependency, then it must be included as part of the build. That means including an easyconfig file for it, and adding it to the right easystack file.

"},{"location":"adding_software/adding_development_software/#using-commit-ids-or-tags-for-software-commit","title":"Using commit IDs or tags for --software-commit","text":"

Installing with --software-commit requires that you include either a commit ID or a tag. The installation procedure will use this to obtain the sources for the build. Because tags can be changed to point to a different commit ID, we recommend avoiding them and sticking to the commit ID itself. You can then include this in the versionsuffix in your easyconfig file, to generate a unique (though \"ugly\") module name.
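As a small sketch of how the commit ID ends up in the module name, the following mimics EasyBuild's %-style template resolution for versionsuffix (a simplified stand-in, not EasyBuild's actual parsing code):

```python
# Simplified stand-in for EasyBuild's template resolution; the real
# mechanism substitutes %(software_commit)s when the easyconfig is parsed.
version = "devel"
versionsuffix = "-%(software_commit)s"
software_commit = "2ba17de6096933275abec0550981d9122e4e5f28"

resolved_suffix = versionsuffix % {"software_commit": software_commit}
module_name = f"ESPResSo/{version}{resolved_suffix}"
print(module_name)
# ESPResSo/devel-2ba17de6096933275abec0550981d9122e4e5f28
```

The resulting module name is unwieldy, but it uniquely identifies exactly which commit was built, which is the point of embedding the commit ID.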

"},{"location":"adding_software/adding_development_software/#patch-files","title":"Patch files","text":"

If your specific development build requires patch files, you should add these to the easyconfigs/ directory. If a patch is already part of an EasyBuild release, this may not be necessary, as it will be taken directly from EasyBuild. If it is a new patch that is not part of an EasyBuild release, include it in the easyconfigs/ directory.

"},{"location":"adding_software/adding_development_software/#checksums","title":"Checksums","text":"

EasyBuild's easyconfig files typically contain checksums, as their use is highly recommended. By default, EasyBuild will compute the checksums of the sources and patch files it needs for a given installation, and compare them with the values in the easyconfig file. Because builds for dev.eessi.io change much more often, hard-coded checksums become a problem, as they would need to be updated with every new build. For this reason, we recommend not including checksums in your development easyconfig files (unless you need to, for a specific reason).
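For reference, this is roughly how a SHA-256 source checksum is computed, using Python's standard hashlib as a minimal sketch (EasyBuild's own checksum handling supports more checksum types and options):

```python
import hashlib
import os
import tempfile

def sha256_checksum(path, chunk_size=65536):
    """Compute the SHA-256 checksum of a file, in the form you would
    list in an easyconfig's checksums entry."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large source tarballs don't fill memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Throwaway file standing in for a downloaded source tarball:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"example sources\n")
checksum = sha256_checksum(tmp.name)
os.unlink(tmp.name)
print(checksum)
```

Since a development build pulls sources from a moving commit, this value would change with every rebuild, which is why omitting it from development easyconfigs is the pragmatic choice.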

"},{"location":"adding_software/adding_development_software/#easystack-files","title":"Easystack files","text":"

After an easyconfig file has been created and added to the easyconfigs subdirectory, an easystack file that picks it up needs to be in place so that a build can be triggered.

Naming convention for easystack files

The easystack files must follow a naming convention and be named something like software-eb-X.Y.Z-dev.yml, where X.Y.Z corresponds to the EasyBuild version used to install the software. Following our example for ESPResSo, it would look like:

easyconfigs:\n  - ESPResSo-devel-foss-2023a-software-commit.eb:\n      options:\n        software-commit: 2ba17de6096933275abec0550981d9122e4e5f28 # release 4.2.2\n

ESPResSo-devel-foss-2023a-software-commit.eb would be the name of the easyconfig file added in our example step above. Note the option passing the software-commit for the development version that should be built. For the sake of this example, the chosen commit actually corresponds to the 4.2.2 release of ESPResSo.
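The naming convention above can be checked with a small sketch (a hypothetical validator written for illustration, not a tool that EESSI ships):

```python
import re

# Hypothetical validator for the easystack naming convention described
# above; the X.Y.Z part is the EasyBuild version used for the build.
EASYSTACK_NAME = re.compile(r"^software-eb-\d+\.\d+\.\d+-dev\.yml$")

def follows_convention(filename):
    return bool(EASYSTACK_NAME.match(filename))

print(follows_convention("software-eb-4.9.3-dev.yml"))  # True
print(follows_convention("espresso-dev.yml"))           # False
```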

"},{"location":"adding_software/adding_development_software/#triggering-builds","title":"Triggering builds","text":"

We use the EESSI build-test-deploy bot to handle software builds. All one needs to do is open a PR with the changes adding the easyconfig and easystack files, and comment bot: build. This can only be done by previously authorized users. The current build cluster for dev.eessi.io only builds for the zen2 CPU microarchitecture, but this is likely to change.

Once a build is complete and the bot:deploy label is added, a staging PR can be merged to deploy the application to the dev.eessi.io CernVM-FS repository. On a system with dev.eessi.io mounted, all that is left is to module use /cvmfs/dev.eessi.io/versions/2023.06/modules/all and try out the software!

There is currently no initialisation script or module for dev.eessi.io, but this feature is coming soon.

"},{"location":"adding_software/building_software/","title":"Building software","text":"

(for maintainers)

"},{"location":"adding_software/building_software/#bot_build","title":"Instructing the bot to build","text":"

Once the pull request is open, you can instruct the bot to build the software by posting a comment.

For more information, see the building section in the bot documentation.

Warning

Permission to trigger building of software must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/building_software/#guidelines","title":"Guidelines","text":""},{"location":"adding_software/building_software/#checking-the-builds","title":"Checking the builds","text":"

If all goes well, you should see SUCCESS for each build, along with a button to get more information about the checks that were performed, and metadata information on the resulting artefact.

Note

Make sure the result is what you expect it to be for all builds before you deploy!

"},{"location":"adding_software/building_software/#failing-builds","title":"Failing builds","text":"

Warning

The bot will currently not give you any information on how or why a build is failing.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/building_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"

To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.

For more information, see the deploying section in the bot documentation.

Warning

Permission to trigger deployment of software installations must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/building_software/#merging-the-pull-request","title":"Merging the pull request","text":"

You should be able to verify in the pull request that the ingestion has been done, since the CI should fail initially to indicate that some software installations listed in your modified easystack are missing.

Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass, and then the pull request can be merged.

Note

This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml) that checks for missing installations, in the correct branch (for example 2023.06) of the software-layer.

If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!

Warning

You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/building_software/#getting-help","title":"Getting help","text":"

If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer channel of the EESSI Slack.

"},{"location":"adding_software/contribution_policy/","title":"Contribution policy","text":"

(version v0.1.0 - updated 9 Nov 2023)

Note

This policy is subject to change, please check back regularly.

"},{"location":"adding_software/contribution_policy/#purpose","title":"Purpose","text":"

The purpose of this contribution policy is to provide guidelines for adding software to EESSI.

It informs about what requirements must be met in order for software to be eligible for inclusion in the EESSI software layer.

"},{"location":"adding_software/contribution_policy/#requirements","title":"Requirements","text":"

The following requirements must be taken into account when adding software to EESSI.

Note that additional restrictions may apply in specific cases that are currently not covered explicitly by this policy.

"},{"location":"adding_software/contribution_policy/#freely_redistributable_software","title":"i) Freely redistributable software","text":"

Only freely redistributable software can be added to the EESSI repository, and we strongly prefer including only open source software in EESSI.

Make sure that you are aware of the relevant software licenses, and that redistribution of the software you want to add to EESSI is allowed.

For more information about a specific software license, see the SPDX license list.

Note

We intend to automatically verify that this requirement is met, by requiring that the SPDX license identifier is provided for all software included in EESSI.

"},{"location":"adding_software/contribution_policy/#built_by_bot","title":"ii) Built by the bot","text":"

All software included in the EESSI repository must be built autonomously by our bot.

For more information, see our semi-automatic software installation procedure.

"},{"location":"adding_software/contribution_policy/#easybuild","title":"iii) Built and installed with EasyBuild","text":"

We currently require that all software installations in EESSI are built and installed using EasyBuild.

We strongly prefer that the latest release of EasyBuild that is available at the time is used to add software to EESSI.

The use of --from-pr and --include-easyblocks-from-pr to pull in changes to EasyBuild that are required to make the installation work correctly in EESSI is allowed, but only if that is strictly required (that is, if those changes are not included yet in the latest EasyBuild release).

"},{"location":"adding_software/contribution_policy/#supported_toolchain","title":"iv) Supported compiler toolchain","text":"

A compiler toolchain that is still supported by the latest EasyBuild release must be used for building the software.

For more information on supported toolchains, see the EasyBuild toolchain support policy.

"},{"location":"adding_software/contribution_policy/#recent_toolchains","title":"v) Recent toolchain versions","text":"

We strongly prefer adding software to EESSI that was built with a recent compiler toolchain.

When adding software to a particular version of EESSI, you should use a toolchain version that is already installed.

If you would like to see an additional toolchain version being added to a particular version of EESSI, please open a support request for this, and motivate your request.

"},{"location":"adding_software/contribution_policy/#recent_software_versions","title":"vi) Recent software versions","text":"

We strongly prefer adding sufficiently recent software versions to EESSI.

If you would like to add older software versions, please clearly motivate the need for this in your contribution.

"},{"location":"adding_software/contribution_policy/#cpu_targets","title":"vii) CPU targets","text":"

Software that is added to EESSI should work on all supported CPU targets.

Exceptions to this requirement are allowed if technical problems that cannot be resolved with reasonable effort prevent the installation of the software for specific CPU targets.

"},{"location":"adding_software/contribution_policy/#testing","title":"viii) Testing","text":"

We should be able to test the software installations via the EESSI test suite, in particular for software applications and user-facing tools.

Ideally one or more tests are available that verify that the software is functionally correct, and that it (still) performs well.

Tests that are run during the software installation procedure as performed by EasyBuild must pass. Exceptions can be made if only a small subset of tests fail for specific CPU targets, as long as these exceptions are tracked and an effort is made to assess the impact of those failing tests.

It should be possible to run a minimal smoke test for the software included in EESSI, for example using EasyBuild's --sanity-check-only feature.

Note

The EESSI test suite is still in active development, and currently only has a minimal set of tests available.

When the test suite is more mature, this requirement will be enforced more strictly.

"},{"location":"adding_software/contribution_policy/#changelog","title":"Changelog","text":""},{"location":"adding_software/contribution_policy/#v010-9-nov-2023","title":"v0.1.0 (9 Nov 2023)","text":""},{"location":"adding_software/debugging_failed_builds/","title":"Debugging failed builds","text":"

(for contributors + maintainers)

Unfortunately, software does not always build successfully. Since EESSI targets novel CPU architectures as well, build failures on such platforms are quite common, as the software and/or the software build systems have not always been adjusted to support these architectures yet.

In EESSI, all software packages are built by a bot. This is great for builds that complete successfully, as we can build many software packages for a wide range of hardware with little human intervention. However, it does mean that you, as a contributor, cannot easily access the build directory and build logs to figure out build issues.

This page describes how you can interactively reproduce failed builds, so that you can more easily debug the issue.

Throughout this page, we will use this PR as an example. It intends to add LAMMPS to EESSI. Among other issues, it failed while building Plumed.

"},{"location":"adding_software/debugging_failed_builds/#prerequisites","title":"Prerequisites","text":"

You will need to have:

"},{"location":"adding_software/debugging_failed_builds/#preparing-the-environment","title":"Preparing the environment","text":"

A number of steps are needed to create the same environment in which the bot builds.

"},{"location":"adding_software/debugging_failed_builds/#fetching-the-feature-branch","title":"Fetching the feature branch","text":"

Looking at the example PR, we see the PR is created from this fork. First, we clone the fork, then check out the feature branch (LAMMPS_23Jun2022):

git clone https://github.com/laraPPr/software-layer/\ncd software-layer\ngit checkout LAMMPS_23Jun2022\n
Alternatively, if you already have a clone of the software-layer repository, you can add the fork as a new remote:
cd software-layer\ngit remote add laraPPr https://github.com/laraPPr/software-layer/\ngit fetch laraPPr\ngit checkout LAMMPS_23Jun2022\n

"},{"location":"adding_software/debugging_failed_builds/#starting-a-shell-in-the-eessi-container","title":"Starting a shell in the EESSI container","text":"

Simply run the EESSI container (eessi_container.sh), which should be in the root of the software-layer repository. Use -r to specify which EESSI repository (e.g. software.eessi.io, dev.eessi.io, ...) should be mounted in the container

./eessi_container.sh --access rw -r software.eessi.io\n

If you want to install NVIDIA GPU software, make sure to also add the --nvidia all argument, to ensure that your GPU drivers get mounted inside the container:

./eessi_container.sh --access rw -r software.eessi.io --nvidia all\n

Note

You may have to press Enter to see the prompt clearly, as some messages starting with CernVM-FS: may be printed after the first Apptainer> prompt was shown.

"},{"location":"adding_software/debugging_failed_builds/#more-efficient-approach-for-multiplecontinued-debugging-sessions","title":"More efficient approach for multiple/continued debugging sessions","text":"

While the above works perfectly well, you might not be able to complete your debugging session in one go. With the above approach, several steps will just be repeated every time you start a debugging session:

To avoid this, we create two directories. One holds the container & host_injections, which are (typically) common between multiple PRs, so you don't have to redownload the container / reinstall the host_injections when you start working on another PR. The other holds the PR-specific data: a tarball storing the software you'll build in your interactive debugging session. The paths we pick here are just examples; you can pick any persistent, writeable location for this:

eessi_common_dir=${HOME}/eessi-manual-builds\neessi_pr_dir=${HOME}/pr360\n

Now, we start the container

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir}\n

Here, the SINGULARITY_CACHEDIR makes sure that if the container was already downloaded, and is present in the cache, it is not redownloaded. The host injections will just be picked up from ${eessi_common_dir}/host_injections (if those were already installed before). And finally, the --save makes sure that everything that you build in the container gets stored in a tarball as soon as you exit the container.

Note that the first exit command only exits the Gentoo Prefix environment. Only the second takes you out of the container, and prints where the tarball will be stored:

[EESSI 2023.06] $ exit\nlogout\nLeaving Gentoo Prefix with exit status 1\nApptainer> exit\nexit\nSaved contents of tmp directory '/tmp/eessi-debug.VgLf1v9gf0' to tarball '${HOME}/pr360/EESSI-1698056784.tgz' (to resume session add '--resume ${HOME}/pr360/EESSI-1698056784.tgz')\n

Note that the tarballs can be quite sizeable, so make sure to pick a filesystem where you have a large enough quota.

Next time you want to continue investigating this issue, you can start the container with --resume DIR/TGZ and continue where you left off, having all dependencies already built and available.

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --resume ${eessi_pr_dir}/EESSI-1698056784.tgz\n

For a detailed description on using the script eessi_container.sh, see here.

Note

Reusing a previously downloaded container, or an existing CUDA installation from host_injections, is not a good approach if those could be the cause of your issues. If you are unsure whether this is the case, simply follow the regular approach to starting the EESSI container.

Note

It is recommended to clean the container cache and host_injections directories every now and again, to make sure you pick up the latest changes for those two components.

"},{"location":"adding_software/debugging_failed_builds/#start-the-gentoo-prefix-environment","title":"Start the Gentoo Prefix environment","text":"

The next step is to start the Gentoo Prefix environment.

First, you'll have to set which repository and version of EESSI you are building for. For example:

export EESSI_CVMFS_REPO=/cvmfs/software.eessi.io\nexport EESSI_VERSION=2023.06\n

Then, we set EESSI_OS_TYPE and EESSI_CPU_FAMILY and run the startprefix command to start the Gentoo Prefix environment:

export EESSI_OS_TYPE=linux  # We only support Linux for now\nexport EESSI_CPU_FAMILY=$(uname -m)\n${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/startprefix\n

Unfortunately, there is no way to retain the ${EESSI_CVMFS_REPO} and ${EESSI_VERSION} in your prefix environment, so we have to set them again. For example:

export EESSI_CVMFS_REPO=/cvmfs/software.eessi.io\nexport EESSI_VERSION=2023.06\n

Note

By activating the Gentoo Prefix environment, the system tools (e.g. ls) you would normally use are now provided by Gentoo Prefix, instead of the container OS. E.g. running which ls after starting the prefix environment as above will return /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin/ls. This makes the builds completely independent from the container OS.

"},{"location":"adding_software/debugging_failed_builds/#building-for-the-generic-optimization-target","title":"Building for the generic optimization target","text":"

If you want to replicate a build with generic optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic) you will need to set the following environment variable:

export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic\n

"},{"location":"adding_software/debugging_failed_builds/#building-software-with-the-eessi-install-softwaresh-script","title":"Building software with the EESSI-install-software.sh script","text":"

The Automatic build and deploy bot installs software by executing the EESSI-install-software.sh script. The advantage is that running this script is the closest you can get to replicating the bot's behaviour - and thus the failure. The downside is that if a PR adds a lot of software, it may take quite a long time to run - even if you might already know what the problematic software package is. In that case, you might be better off following the steps under Building software from an easystack file or Building an individual package.

Note that you could also combine approaches: first build everything using the EESSI-install-software.sh script, until you reproduce the failure. Then, start making modifications (e.g. changes to the EasyConfig, patches, etc) and trying to rebuild that package individually to test your changes.

To build software using the EESSI-install-software.sh script, you'll first need to get the diff file for the PR. This is used by the EESSI-install-software.sh script to see what is changed in this PR - and thus what needs to be built for this PR. To download the diff for PR 360, we would run:

wget https://github.com/EESSI/software-layer/pull/360.diff\n

Now, we run the EESSI-install-software.sh script:

./EESSI-install-software.sh\n
"},{"location":"adding_software/debugging_failed_builds/#building-software-from-an-easystack-file","title":"Building software from an easystack file","text":""},{"location":"adding_software/debugging_failed_builds/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"

To activate the software environment, run

source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n

Note

If you get an error bash: /versions//init/bash: No such file or directory, you forgot to reset the ${EESSI_CVMFS_REPO} and ${EESSI_VERSION} environment variables at the end of the previous step.

Note

If you want to build with generic optimization, you should run export EESSI_CPU_FAMILY=$(uname -m) && export EESSI_SOFTWARE_SUBDIR_OVERRIDE=${EESSI_CPU_FAMILY}/generic before sourcing.

For more info on starting the EESSI software environment, see here

"},{"location":"adding_software/debugging_failed_builds/#configure-easybuild","title":"Configure EasyBuild","text":"

It is important that we configure EasyBuild in the same way as the bot uses it, with one small exception: our working directory will be different. Typically, that doesn't matter, but it's good to be aware of this one difference, in case you fail to replicate the build failure.

In this example, we create a unique temporary directory inside /tmp to serve as our workdir. Then, we source the configure_easybuild script, which configures EasyBuild by setting environment variables.

export WORKDIR=$(mktemp --directory --tmpdir=/tmp  -t eessi-debug.XXXXXXXXXX)\nsource scripts/utils.sh && source configure_easybuild\n
Among other things, the configure_easybuild script sets the install path for EasyBuild to the correct installation directory (${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_SOFTWARE_SUBDIR}). This is the exact same path the bot uses to build; a writeable overlay filesystem in the container makes it possible to write to this path in /cvmfs (which normally is read-only).

Note

If you started the container using --resume, you may want WORKDIR to point to the workdir you created previously (instead of creating a new, temporary directory with mktemp).

Note

If you want to replicate a build with generic optimization (i.e. in $EESSI_CVMFS_REPO/versions/${EESSI_VERSION}/software/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}/generic) you will need to set export EASYBUILD_OPTARCH=GENERIC after sourcing configure_easybuild.

Next, we need to determine the correct version of EasyBuild to load. Since the example PR changes the file eessi-2023.06-eb-4.8.1-2021b.yml, this tells us the bot was using version 4.8.1 of EasyBuild to build this. Thus, we load that version of the EasyBuild module and check if everything was configured correctly:
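The EasyBuild version can also be read off the easystack filename mechanically; a small sketch (the filename is the one from the example PR, and the sed pattern is just an illustration of the eessi-<eessi_version>-eb-<eb_version>-<toolchain>.yml naming scheme):

```shell
# Derive the EasyBuild version from an easystack filename of the form
# eessi-<eessi_version>-eb-<eb_version>-<toolchain>.yml
easystack="eessi-2023.06-eb-4.8.1-2021b.yml"
eb_version=$(echo "${easystack}" | sed -E 's/.*-eb-([0-9.]+)-.*/\1/')
echo "EasyBuild version: ${eb_version}"
```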

module load EasyBuild/4.8.1\neb --show-config\n
You should get something similar to

#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath            (E) = /tmp/easybuild/easybuild/build\ncontainerpath        (E) = /tmp/easybuild/easybuild/containers\ndebug                (E) = True\nexperimental         (E) = True\nfilter-deps          (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib, Yasm\nfilter-env-vars      (E) = LD_LIBRARY_PATH\nhooks                (E) = ${HOME}/software-layer/eb_hooks.py\nignore-osdeps        (E) = True\ninstallpath          (E) = /tmp/easybuild/software/linux/aarch64/neoverse_n1\nmodule-extensions    (E) = True\npackagepath          (E) = /tmp/easybuild/easybuild/packages\nprefix               (E) = /tmp/easybuild/easybuild\nread-only-installdir (E) = True\nrepositorypath       (E) = /tmp/easybuild/easybuild/ebfiles_repo\nrobot-paths          (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/aarch64/neoverse_n1/software/EasyBuild/4.8.1/easybuild/easyconfigs\nrpath                (E) = True\nsourcepath           (E) = /tmp/easybuild/easybuild/sources:\nsysroot              (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/aarch64\ntrace                (E) = True\nzip-logs             (E) = bzip2\n
"},{"location":"adding_software/debugging_failed_builds/#building-everything-in-the-easystack-file","title":"Building everything in the easystack file","text":"

In our example PR, the easystack file that was changed was eessi-2023.06-eb-4.8.1-2021b.yml. To build this, we run (in the directory that contains the checkout of this feature branch):

eb --easystack eessi-2023.06-eb-4.8.1-2021b.yml --robot\n
After some time, this build fails while trying to build Plumed, and we can access the build log to look for clues on why it failed.

"},{"location":"adding_software/debugging_failed_builds/#building-an-individual-package","title":"Building an individual package","text":"

First, prepare the environment by following the Starting the EESSI software environment and Configure EasyBuild steps above.

In our example PR, the individual package that was added to eessi-2023.06-eb-4.8.1-2021b.yml was LAMMPS-23Jun2022-foss-2021b-kokkos.eb. To mimic the build behaviour, we'll also have to (re)use any options that are listed in the easystack file for LAMMPS-23Jun2022-foss-2021b-kokkos.eb, in this case the option --from-pr 19000. Thus, to build, we run:

eb LAMMPS-23Jun2022-foss-2021b-kokkos.eb --robot --from-pr 19000\n
After some time, this build fails while trying to build Plumed, and we can access the build log to look for clues on why it failed.

Note

While this might be faster than the easystack-based approach, this is not how the bot builds. So while it may reproduce the failure the bot encountered, it may also not reproduce the failure at all, or run into different issues. If you want to be sure, use the easystack-based approach.

"},{"location":"adding_software/debugging_failed_builds/#rebuilding-software","title":"Rebuilding software","text":"

Rebuilding software requires an additional step at the beginning: the software first needs to be removed. We assume you've already checked out the feature branch. Then, you need to start the container with the additional --fakeroot argument, otherwise you will not be able to remove files from the /cvmfs prefix. Make sure to also include the --save argument, as we will need the tarball later on. E.g.

SINGULARITY_CACHEDIR=${eessi_common_dir}/container_cache ./eessi_container.sh --access rw -r software.eessi.io --nvidia all --host-injections ${eessi_common_dir}/host_injections --save ${eessi_pr_dir} --fakeroot\n
Then, initialize the EESSI environment
source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\n
and get the diff file for the corresponding PR, e.g. for PR 123:
wget https://github.com/EESSI/software-layer/pull/123.diff\n
Finally, run the EESSI-remove-software.sh script
./EESSI-remove-software.sh\n

This should remove any software specified in a rebuild easystack file that was added in your current feature branch.

Now, exit the container, paying attention to the instructions that are printed to resume later, e.g.:

Saved contents of tmp directory '/tmp/eessi.WZxeFUemH2' to tarball '/home/myuser/pr507/EESSI-1711538681.tgz' (to resume session add '--resume /home/myuser/pr507/EESSI-1711538681.tgz')\n

Now, continue with the original instructions to start the container (i.e. either here or with this alternate approach) and make sure to add the --resume flag. This way, you are resuming from the tarball (i.e. with the software removed that has to be rebuilt), but in a new container in which you have regular (i.e. no root) permissions.

"},{"location":"adding_software/debugging_failed_builds/#running-the-test-step","title":"Running the test step","text":"

If you are still in the prefix layer (i.e. after previously building something), exit it first:

$ exit\nlogout\nLeaving Gentoo Prefix with exit status 0\n
Then, source the EESSI init script (again):
Apptainer> source ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/init/bash\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} Apptainer>\n

Note

If you are in a SLURM environment, make sure to run for i in $(env | grep SLURM); do unset \"${i%=*}\"; done to unset any SLURM environment variables. Failing to do so will cause mpirun to pick up on these and e.g. infer how many slots are available. If you run into errors of the form \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\", you probably forgot this step.
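The loop from the note above in copy-pasteable form (illustrative; it simply unsets every environment variable whose name starts with SLURM):

```shell
# Unset every SLURM_* environment variable, so that mpirun does not
# infer slot counts from a stale SLURM allocation
for i in $(env | grep ^SLURM); do unset "${i%=*}"; done
```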

Then, execute the run_tests.sh script. We are assuming you are still in the root of the software-layer repository that you cloned earlier:

./run_tests.sh\n
If all goes well, you should see (part of) the EESSI test suite being run by ReFrame, finishing with something like

[  PASSED  ] Ran X/Y test case(s) from Z check(s) (0 failure(s), 0 skipped, 0 aborted)\n

Note

If you are running on a system with hyperthreading enabled, you may still run into the \"There are not enough slots available in the system to satisfy the X slots that were requested by the application:\" error from mpirun, because hardware threads are not considered to be slots by default by OpenMPI's mpirun. In this case, run with OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1 ./run_tests.sh (for OpenMPI 4.X) or PRTE_MCA_rmaps_default_mapping_policy=:hwtcpus ./run_tests.sh (for OpenMPI 5.X).

"},{"location":"adding_software/debugging_failed_builds/#known-causes-of-issues-in-eessi","title":"Known causes of issues in EESSI","text":""},{"location":"adding_software/debugging_failed_builds/#the-custom-system-prefix-of-the-compatibility-layer","title":"The custom system prefix of the compatibility layer","text":"

Some installations might expect the system root (sysroot, for short) to be in /. However, in case of EESSI, we are building against the OS in the compatibility layer. Thus, our sysroot is something like ${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}. This can cause issues if installation procedures assume the sysroot is in /.
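The sysroot path is composed from the environment variables used throughout this page; a small sketch with the 2023.06 example values:

```shell
# Compose the EESSI sysroot path from the usual EESSI environment variables
# (values below are the 2023.06 example used on this page)
EESSI_CVMFS_REPO=/cvmfs/software.eessi.io
EESSI_VERSION=2023.06
EESSI_OS_TYPE=linux
EESSI_CPU_FAMILY=$(uname -m)
sysroot="${EESSI_CVMFS_REPO}/versions/${EESSI_VERSION}/compat/${EESSI_OS_TYPE}/${EESSI_CPU_FAMILY}"
echo "${sysroot}"
```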

One example of a sysroot issue was in installing wget. The EasyConfig for wget defined

# make sure pkg-config picks up system packages (OpenSSL & co)\npreconfigopts = \"export PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/lib/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
This will not work in EESSI, since OpenSSL should be picked up from the compatibility layer. This was fixed by changing the EasyConfig to read
preconfigopts = \"export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig:%(sysroot)s/usr/lib/pkgconfig:%(sysroot)s/usr/lib/x86_64-linux-gnu/pkgconfig && \"\nconfigopts = '--with-ssl=openssl '\n
The %(sysroot)s is a template value which EasyBuild will resolve to the value that has been configured in EasyBuild for sysroot (it is one of the fields printed by eb --show-config if a non-standard sysroot is configured).
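To illustrate what that template resolution amounts to, here is a minimal sketch that mimics the substitution in plain shell (the real expansion is done by EasyBuild itself; the paths are example values):

```shell
# Illustrative only: mimic the substitution EasyBuild performs for the
# %(sysroot)s template value in an EasyConfig option
sysroot=/cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64
template='export PKG_CONFIG_PATH=%(sysroot)s/usr/lib64/pkgconfig'
expanded="${template//'%(sysroot)s'/${sysroot}}"
echo "${expanded}"
```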

If you encounter issues where the installation can not find something that is normally provided by the OS (i.e. not one of the dependencies in your module environment), you may need to resort to a similar approach.

"},{"location":"adding_software/debugging_failed_builds/#the-writeable-overlay","title":"The writeable overlay","text":"

The writeable overlay in the container is known to be a bit slow sometimes. Thus, we have seen tests failing because they exceed some timeout (e.g. this issue).

To investigate if the writeable overlay is somehow the issue, you can make sure the installation gets done somewhere else, e.g. in the temporary directory in /tmp that you created as workdir. To do this, set

export EASYBUILD_INSTALLPATH=${WORKDIR}\n

after the step in which you have sourced the configure_easybuild script. Note that in order to find (with module av) any modules that get installed here, you will need to add this path to the MODULEPATH:

module use ${EASYBUILD_INSTALLPATH}/modules/all\n

Then, retry building the software (as described above). If the build now succeeds, you know that indeed the writeable overlay caused the issue. We have to build in this writeable overlay when we do real deployments. Thus, if you hit such a timeout, try to see if you can (temporarily) modify the timeout value in the test so that it passes.

"},{"location":"adding_software/deploying_software/","title":"Deploying software","text":"

(for maintainers)

"},{"location":"adding_software/deploying_software/#instructing-the-bot-to-deploy","title":"Instructing the bot to deploy","text":"

To make the bot deploy the successfully built software, you should issue the corresponding instruction to the bot.

For more information, see the deploying section in the bot documentation.

Warning

Permission to trigger deployment of software installations must be granted to your GitHub account first!

See bot permissions for more information.

"},{"location":"adding_software/deploying_software/#merging-the-pull-request","title":"Merging the pull request","text":"

You should be able to verify in the pull request whether the ingestion has been done: the CI should initially fail, to indicate that some software installations listed in your modified easystack file are still missing.

Once the ingestion has been done, simply re-triggering the CI workflow should be sufficient to make it pass, after which the pull request can be merged.

Note

This assumes that the easystack file being modified is considered by the CI workflow file (.github/workflows/test_eessi.yml) that checks for missing installations, in the correct branch (for example 2023.06) of the software-layer.

If that's not the case yet, update this workflow in your pull request as well to add the missing easystack file!

Warning

You need permissions to re-trigger CI workflows and merge pull requests in the software-layer repository.

Ask for help in the #software-layer channel of the EESSI Slack if needed!

"},{"location":"adding_software/deploying_software/#getting-help","title":"Getting help","text":"

If you have any questions, or if you need help with something, don't hesitate to contact us via the #software-layer channel of the EESSI Slack.

"},{"location":"adding_software/opening_pr/","title":"Opening a pull request","text":"

(for contributors)

To add software to EESSI, you should go through the semi-automatic software installation procedure by:

Warning

Make sure you are also aware of our contribution policy when adding software to EESSI.

"},{"location":"adding_software/opening_pr/#preparation","title":"Preparation","text":"

Before you can make a pull request to the software-layer, you should fork the repository in your GitHub account.

For the remainder of these instructions, we assume that your GitHub account is @koala.

Note

Don't forget to replace koala with the name of your GitHub account in the commands below!

1) Clone the EESSI/software-layer repository:

mkdir EESSI\ncd EESSI\ngit clone https://github.com/EESSI/software-layer\ncd software-layer\n

2) Add your fork as a remote

git remote add koala git@github.com:koala/software-layer.git\n

3) Check out the branch that corresponds to the version of EESSI repository you want to add software to, for example 2023.06-software.eessi.io:

git checkout 2023.06-software.eessi.io\n

Note

The commands above only need to be run once, to prepare your setup for making pull requests.

"},{"location":"adding_software/opening_pr/#software_layer_pull_request","title":"Creating a pull request","text":"

1) Make sure that your 2023.06-software.eessi.io branch in the checkout of the EESSI/software-layer repository is up-to-date

cd EESSI/software-layer\ngit checkout 2023.06-software.eessi.io \ngit pull origin 2023.06-software.eessi.io \n

2) Create a new branch (use a sensible name, not example_branch as below), and check it out

git checkout -b example_branch\n

3) Determine the correct easystack file to change, and add one or more lines to it that specify which easyconfigs should be installed

echo '  - example-1.2.3-GCC-12.3.0.eb' >> easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\n
Note that the naming scheme is standardized and should be eessi-<eessi_version>-eb-<eb_version>-<toolchain_version>.yml. See the official EasyBuild documentation on easystack files for more information on the syntax.

4) Stage and commit the changes into your branch with a sensible message

git add easystacks/software.eessi.io/2023.06/eessi-2023.06-eb-4.8.2-2023a.yml\ngit commit -m \"{2023.06}[GCC/12.3.0] example 1.2.3\"\n

5) Push your branch to your fork of the software-layer repository

git push koala example_branch\n

6) Go to the GitHub web interface to open your pull request, or use the helpful link that should show up in the output of the git push command.

Make sure you target the correct branch: the one that corresponds to the version of EESSI you want to add software to (like 2023.06-software.eessi.io).

If all goes well, one or more bots should almost instantly create a comment in your pull request with an overview of how it is configured - you will need this information when providing build instructions.

"},{"location":"adding_software/opening_pr/#rebuilding_software","title":"Rebuilding software","text":"

We typically do not rebuild software, since (strictly speaking) this breaks reproducibility for anyone using the software. However, there are certain situations in which it is difficult or impossible to avoid.

To do a rebuild, you add the software you want to rebuild to a dedicated easystack file in the rebuilds directory. Use the following naming convention: YYYY.MM.DD-eb-<EB_VERSION>-<APPLICATION_NAME>-<APPLICATION_VERSION>-<SHORT_DESCRIPTION>.yml, where YYYY.MM.DD is the opening date of your PR. E.g. 2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml was added in a PR on the 6th of May 2024, and was used to rebuild CUDA-12.1.1 using EasyBuild 4.9.1 to resolve an issue with some runtime libraries missing from the initial CUDA 12.1.1 installation.
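The naming convention can be sketched as a simple filename composition (the values below are those of the CUDA example):

```shell
# Compose a rebuild easystack filename following the convention above
# (illustrative values, taken from the CUDA 12.1.1 example)
DATE=2024.05.06
EB_VERSION=4.9.1
APP_NAME=CUDA
APP_VERSION=12.1.1
SHORT_DESCRIPTION=ship-full-runtime
rebuild_file="${DATE}-eb-${EB_VERSION}-${APP_NAME}-${APP_VERSION}-${SHORT_DESCRIPTION}.yml"
echo "${rebuild_file}"
```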

At the top of your easystack file, please use comments to include a short description, and make sure to include any relevant links to related issues (e.g. from the GitHub repositories of EESSI, EasyBuild, or the software you are rebuilding).

As an example, consider the full easystack file (2024.05.06-eb-4.9.1-CUDA-12.1.1-ship-full-runtime.yml) used for the aforementioned CUDA rebuild:

# 2024.05.06\n# Original matching of files we could ship was not done correctly. We were\n# matching the basename for files (e.g., libcudart.so from libcudart.so.12)\n# rather than the name stub (libcudart)\n# See https://github.com/EESSI/software-layer/pull/559\neasyconfigs:\n  - CUDA-12.1.1.eb:\n        options:\n                accept-eula-for: CUDA\n

By separating rebuilds in dedicated files, we still maintain a complete software bill of materials: it is transparent what got rebuilt, for which reason, and when.

"},{"location":"adding_software/overview/","title":"Overview of adding software to EESSI","text":"

We welcome contributions to the EESSI software stack. This page shows the procedure and provides links to the contribution policy and the technical details of making a contribution.

"},{"location":"adding_software/overview/#contribute-a-software-to-the-eessi-software-stack","title":"Contribute a software to the EESSI software stack","text":"
\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n    I(contributor)  \n    K(reviewer)\n    A(Is there an EasyConfig for software) -->|No|B(Create an EasyConfig and contribute it to EasyBuild)\n    A --> |Yes|D(Create a PR to software-layer)\n    B --> C(Evaluate and merge pull request)\n    C --> D\n    D --> E(Review PR & trigger builds)\n    E --> F(Debug build issue if needed)\n    F --> G(Deploy tarballs to S3 bucket)\n    G --> H(Ingest tarballs in EESSI by merging staging PRs)\n     classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n     class A,B,D,F,I blue\n     click B \"https://easybuild.io/\"\n     click D \"../opening_pr/\"\n     click F \"../debugging_failed_builds/\"\n
"},{"location":"adding_software/overview/#contributing-a-reframe-test-to-the-eessi-test-suite","title":"Contributing a ReFrame test to the EESSI test suite","text":"

Ideally, a contributor prepares a ReFrame test for the software to be added to the EESSI software stack.

\n%%{init: { 'theme':'forest', 'sequence': {'useMaxWidth':false} } }%%\nflowchart TB\n\n    Z(Create ReFrame test & PR to tests-suite) --> Y(Review PR & run new test)\n    Y --> W(Debug issue if needed) \n    W --> V(Review PR if needed)\n    V --> U(Merge PR)\n     classDef blue fill:#9abcff,stroke:#333,stroke-width:2px;\n     class Z,W blue\n
"},{"location":"adding_software/overview/#more-about-adding-software-to-eessi","title":"More about adding software to EESSI","text":"

If you need help with adding software to EESSI, please open a support request.

"},{"location":"available_software/overview/","title":"Available software (via modules)","text":"

This table gives an overview of all the available software in EESSI per specific CPU target.

Name aarch64 x86_64 amd intel generic neoverse_n1 neoverse_v1 generic zen2 zen3 zen4 haswell skylake_avx512"},{"location":"available_software/detail/ALL/","title":"ALL","text":"

A Load Balancing Library (ALL) aims to provide an easy way to include dynamic domain-based load balancing into particle based simulation codes. The library is developed in the Simulation Laboratory Molecular Systems of the J\u00fclich Supercomputing Centre at Forschungszentrum J\u00fclich.

https://gitlab.jsc.fz-juelich.de/SLMS/loadbalancing

"},{"location":"available_software/detail/ALL/#available-modules","title":"Available modules","text":"

The overview below shows which ALL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ALL, load one of these modules using a module load command like:

module load ALL/0.9.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ALL/0.9.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/AOFlagger/","title":"AOFlagger","text":"

The AOFlagger is a tool that can find and remove radio-frequency interference (RFI) in radio astronomical observations. It can make use of Lua scripts to make flagging strategies flexible, and the tools are applicable to a wide set of telescopes.

https://aoflagger.readthedocs.io/

"},{"location":"available_software/detail/AOFlagger/#available-modules","title":"Available modules","text":"

The overview below shows which AOFlagger installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using AOFlagger, load one of these modules using a module load command like:

module load AOFlagger/3.4.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 AOFlagger/3.4.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/ASE/","title":"ASE","text":"

ASE is a python package providing an open source Atomic Simulation Environment in the Python scripting language. From version 3.20.1 we also include the ase-ext package, which contains optional reimplementations in C of functions in ASE. ASE uses it automatically when installed.

https://wiki.fysik.dtu.dk/ase

"},{"location":"available_software/detail/ASE/#available-modules","title":"Available modules","text":"

The overview below shows which ASE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ASE, load one of these modules using a module load command like:

module load ASE/3.22.1-gfbf-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ASE/3.22.1-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/ASE/#ase3221-gfbf-2022b","title":"ASE/3.22.1-gfbf-2022b","text":"

This is a list of extensions included in the module:

ase-3.22.1, ase-ext-20.9.0, pytest-mock-3.8.2

"},{"location":"available_software/detail/ATK/","title":"ATK","text":"

ATK provides the set of accessibility interfaces that are implemented by other toolkits and applications. Using the ATK interfaces, accessibility tools have full access to view and control running applications.

https://developer.gnome.org/atk/

"},{"location":"available_software/detail/ATK/#available-modules","title":"Available modules","text":"

The overview below shows which ATK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ATK, load one of these modules using a module load command like:

module load ATK/2.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ATK/2.38.0-GCCcore-13.2.0 x x x x x x x x x ATK/2.38.0-GCCcore-12.3.0 x x x x x x x x x ATK/2.38.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Abseil/","title":"Abseil","text":"

Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. The Abseil library code is collected from Google's own C++ code base, has been extensively tested and used in production, and is the same code we depend on in our daily coding lives.

https://abseil.io/

"},{"location":"available_software/detail/Abseil/#available-modules","title":"Available modules","text":"

The overview below shows which Abseil installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Abseil, load one of these modules using a module load command like:

module load Abseil/20240116.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Abseil/20240116.1-GCCcore-13.2.0 x x x x x x x x x Abseil/20230125.3-GCCcore-12.3.0 x x x x x x x x x Abseil/20230125.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Archive-Zip/","title":"Archive-Zip","text":"

Provides an interface to ZIP archive files.

https://metacpan.org/pod/Archive::Zip

"},{"location":"available_software/detail/Archive-Zip/#available-modules","title":"Available modules","text":"

The overview below shows which Archive-Zip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Archive-Zip, load one of these modules using a module load command like:

module load Archive-Zip/1.68-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Archive-Zip/1.68-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Armadillo/","title":"Armadillo","text":"

Armadillo is an open-source C++ linear algebra library (matrix maths) aiming towards a good balance between speed and ease of use. Integer, floating point and complex numbers are supported, as well as a subset of trigonometric and statistics functions.

https://arma.sourceforge.net/

"},{"location":"available_software/detail/Armadillo/#available-modules","title":"Available modules","text":"

The overview below shows which Armadillo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Armadillo, load one of these modules using a module load command like:

module load Armadillo/12.8.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Armadillo/12.8.0-foss-2023b x x x x x x x x x Armadillo/12.6.2-foss-2023a x x x x x x x x x Armadillo/11.4.3-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Arrow/","title":"Arrow","text":"

Apache Arrow (incl. PyArrow Python bindings), a cross-language development platform for in-memory data.

https://arrow.apache.org

"},{"location":"available_software/detail/Arrow/#available-modules","title":"Available modules","text":"

The overview below shows which Arrow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Arrow, load one of these modules using a module load command like:

module load Arrow/16.1.0-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Arrow/16.1.0-gfbf-2023b x x x x x x x x x Arrow/14.0.1-gfbf-2023a x x x x x x x x x Arrow/11.0.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/Arrow/#arrow1610-gfbf-2023b","title":"Arrow/16.1.0-gfbf-2023b","text":"

This is a list of extensions included in the module:

pyarrow-16.1.0

"},{"location":"available_software/detail/Arrow/#arrow1401-gfbf-2023a","title":"Arrow/14.0.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

pyarrow-14.0.1

"},{"location":"available_software/detail/BCFtools/","title":"BCFtools","text":"

Samtools is a suite of programs for interacting with high-throughput sequencing data. BCFtools is used for reading/writing BCF2/VCF/gVCF files and for calling, filtering, and summarising SNP and short indel sequence variants.

https://www.htslib.org/

"},{"location":"available_software/detail/BCFtools/#available-modules","title":"Available modules","text":"

The overview below shows which BCFtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BCFtools, load one of these modules using a module load command like:

module load BCFtools/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BCFtools/1.18-GCC-12.3.0 x x x x x x x x x BCFtools/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BLAST%2B/","title":"BLAST+","text":"

Basic Local Alignment Search Tool, or BLAST, is an algorithm for comparing primary biological sequence information, such as the amino-acid sequences of different proteins or the nucleotides of DNA sequences.

https://blast.ncbi.nlm.nih.gov/

"},{"location":"available_software/detail/BLAST%2B/#available-modules","title":"Available modules","text":"

The overview below shows which BLAST+ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BLAST+, load one of these modules using a module load command like:

module load BLAST+/2.14.1-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLAST+/2.14.1-gompi-2023a x x x x x x x x x BLAST+/2.14.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/BLIS/","title":"BLIS","text":"

BLIS is a portable software framework for instantiating high-performance BLAS-like dense linear algebra libraries.

https://github.com/flame/blis/

"},{"location":"available_software/detail/BLIS/#available-modules","title":"Available modules","text":"

The overview below shows which BLIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BLIS, load one of these modules using a module load command like:

module load BLIS/0.9.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BLIS/0.9.0-GCC-13.2.0 x x x x x x x x x BLIS/0.9.0-GCC-12.3.0 x x x x x x x x x BLIS/0.9.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BWA/","title":"BWA","text":"

Burrows-Wheeler Aligner (BWA) is an efficient program that aligns relatively short nucleotide sequences against a long reference sequence such as the human genome.

http://bio-bwa.sourceforge.net/

"},{"location":"available_software/detail/BWA/#available-modules","title":"Available modules","text":"

The overview below shows which BWA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BWA, load one of these modules using a module load command like:

module load BWA/0.7.18-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BWA/0.7.18-GCCcore-12.3.0 x x x x x x x x x BWA/0.7.17-20220923-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BamTools/","title":"BamTools","text":"

BamTools provides both a programmer's API and an end-user's toolkit for handling BAM files.

https://github.com/pezmaster31/bamtools

"},{"location":"available_software/detail/BamTools/#available-modules","title":"Available modules","text":"

The overview below shows which BamTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BamTools, load one of these modules using a module load command like:

module load BamTools/2.5.2-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BamTools/2.5.2-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bazel/","title":"Bazel","text":"

Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software.

https://bazel.io/

"},{"location":"available_software/detail/Bazel/#available-modules","title":"Available modules","text":"

The overview below shows which Bazel installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bazel, load one of these modules using a module load command like:

module load Bazel/6.3.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bazel/6.3.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/","title":"BeautifulSoup","text":"

Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping.

https://www.crummy.com/software/BeautifulSoup
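A minimal sketch of the kind of screen-scraping task described above (assuming Beautiful Soup is importable, e.g. after loading the module listed below; the HTML snippet is just an example):

```python
from bs4 import BeautifulSoup

html = "<html><body><a href='https://example.org'>Example</a><p>text</p></body></html>"
# 'html.parser' is Python's built-in parser, so no extra parser package is needed
soup = BeautifulSoup(html, 'html.parser')
links = [a['href'] for a in soup.find_all('a')]
print(links)
```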

"},{"location":"available_software/detail/BeautifulSoup/#available-modules","title":"Available modules","text":"

The overview below shows which BeautifulSoup installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BeautifulSoup, load one of these modules using a module load command like:

module load BeautifulSoup/4.12.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BeautifulSoup/4.12.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/BeautifulSoup/#beautifulsoup4122-gcccore-1230","title":"BeautifulSoup/4.12.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

BeautifulSoup-4.12.2, soupsieve-2.4.1

"},{"location":"available_software/detail/Bio-DB-HTS/","title":"Bio-DB-HTS","text":"

Read files using HTSlib, including BAM/CRAM, Tabix and BCF database files.

https://metacpan.org/release/Bio-DB-HTS

"},{"location":"available_software/detail/Bio-DB-HTS/#available-modules","title":"Available modules","text":"

The overview below shows which Bio-DB-HTS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bio-DB-HTS, load one of these modules using a module load command like:

module load Bio-DB-HTS/3.01-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bio-DB-HTS/3.01-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bio-SearchIO-hmmer/","title":"Bio-SearchIO-hmmer","text":"

Code to parse output from hmmsearch, hmmscan, phmmer and nhmmer, compatible with both version 2 and version 3 of the HMMER package from http://hmmer.org.

https://metacpan.org/pod/Bio::SearchIO::hmmer3

"},{"location":"available_software/detail/Bio-SearchIO-hmmer/#available-modules","title":"Available modules","text":"

The overview below shows which Bio-SearchIO-hmmer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bio-SearchIO-hmmer, load one of these modules using a module load command like:

module load Bio-SearchIO-hmmer/1.7.3-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bio-SearchIO-hmmer/1.7.3-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BioPerl/","title":"BioPerl","text":"

Bioperl is the product of a community effort to produce Perl code which is useful in biology. Examples include Sequence objects, Alignment objects and database searching objects.

https://bioperl.org/

"},{"location":"available_software/detail/BioPerl/#available-modules","title":"Available modules","text":"

The overview below shows which BioPerl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using BioPerl, load one of these modules using a module load command like:

module load BioPerl/1.7.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 BioPerl/1.7.8-GCCcore-12.3.0 x x x x x x x x x BioPerl/1.7.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/BioPerl/#bioperl178-gcccore-1230","title":"BioPerl/1.7.8-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Bio::Procedural-1.7.4, BioPerl-1.7.8, XML::Writer-0.900

"},{"location":"available_software/detail/BioPerl/#bioperl178-gcccore-1220","title":"BioPerl/1.7.8-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Bio::Procedural-1.7.4, BioPerl-1.7.8, XML::Writer-0.900

"},{"location":"available_software/detail/Biopython/","title":"Biopython","text":"

Biopython is a set of freely available tools for biological computation written in Python by an international team of developers. It is a distributed collaborative effort to develop Python libraries and applications which address the needs of current and future work in bioinformatics.

https://www.biopython.org
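A minimal sketch of the kind of sequence manipulation Biopython provides (assuming Biopython is importable, e.g. after loading one of the modules listed below; the DNA fragment is just an example):

```python
from Bio.Seq import Seq

# Complement and translate a short DNA fragment (5 codons, no stop codon)
dna = Seq("ATGGCCATTGTAATG")
print(dna.complement())
print(dna.translate())
```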

"},{"location":"available_software/detail/Biopython/#available-modules","title":"Available modules","text":"

The overview below shows which Biopython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Biopython, load one of these modules using a module load command like:

module load Biopython/1.83-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Biopython/1.83-foss-2023a x x x x x x x x x Biopython/1.81-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Bison/","title":"Bison","text":"

Bison is a general-purpose parser generator that converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser employing LALR(1) parser tables.

https://www.gnu.org/software/bison

"},{"location":"available_software/detail/Bison/#available-modules","title":"Available modules","text":"

The overview below shows which Bison installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bison, load one of these modules using a module load command like:

module load Bison/3.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bison/3.8.2-GCCcore-13.2.0 x x x x x x x x x Bison/3.8.2-GCCcore-12.3.0 x x x x x x x x x Bison/3.8.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Boost.MPI/","title":"Boost.MPI","text":"

Boost provides free peer-reviewed portable C++ source libraries.

https://www.boost.org/

"},{"location":"available_software/detail/Boost.MPI/#available-modules","title":"Available modules","text":"

The overview below shows which Boost.MPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost.MPI, load one of these modules using a module load command like:

module load Boost.MPI/1.83.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.MPI/1.83.0-gompi-2023b x x x x x x x x x Boost.MPI/1.82.0-gompi-2023a x x x x x x x x x Boost.MPI/1.81.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/Boost.Python/","title":"Boost.Python","text":"

Boost.Python is a C++ library which enables seamless interoperability between C++ and the Python programming language.

https://boostorg.github.io/python

"},{"location":"available_software/detail/Boost.Python/#available-modules","title":"Available modules","text":"

The overview below shows which Boost.Python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost.Python, load one of these modules using a module load command like:

module load Boost.Python/1.83.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost.Python/1.83.0-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/Boost/","title":"Boost","text":"

Boost provides free peer-reviewed portable C++ source libraries.

https://www.boost.org/

"},{"location":"available_software/detail/Boost/#available-modules","title":"Available modules","text":"

The overview below shows which Boost installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Boost, load one of these modules using a module load command like:

module load Boost/1.83.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Boost/1.83.0-GCC-13.2.0 x x x x x x x x x Boost/1.82.0-GCC-12.3.0 x x x x x x x x x Boost/1.81.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Bowtie2/","title":"Bowtie2","text":"

Bowtie 2 is an ultrafast and memory-efficient tool for aligning sequencing reads to long reference sequences. It is particularly good at aligning reads of about 50 up to 100s or 1,000s of characters, and particularly good at aligning to relatively long (e.g. mammalian) genomes. Bowtie 2 indexes the genome with an FM Index to keep its memory footprint small: for the human genome, its memory footprint is typically around 3.2 GB. Bowtie 2 supports gapped, local, and paired-end alignment modes.

https://bowtie-bio.sourceforge.net/bowtie2/index.shtml

"},{"location":"available_software/detail/Bowtie2/#available-modules","title":"Available modules","text":"

The overview below shows which Bowtie2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Bowtie2, load one of these modules using a module load command like:

module load Bowtie2/2.5.1-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Bowtie2/2.5.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Brotli/","title":"Brotli","text":"

Brotli is a generic-purpose lossless compression algorithm that compresses data using a combination of a modern variant of the LZ77 algorithm, Huffman coding and 2nd order context modeling, with a compression ratio comparable to the best currently available general-purpose compression methods. It is similar in speed to deflate but offers denser compression. The specification of the Brotli Compressed Data Format is defined in RFC 7932.

https://github.com/google/brotli

"},{"location":"available_software/detail/Brotli/#available-modules","title":"Available modules","text":"

The overview below shows which Brotli installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Brotli, load one of these modules using a module load command like:

module load Brotli/1.1.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brotli/1.1.0-GCCcore-13.2.0 x x x x x x x x x Brotli/1.0.9-GCCcore-12.3.0 x x x x x x x x x Brotli/1.0.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Brunsli/","title":"Brunsli","text":"

Brunsli is a lossless JPEG repacking library.

https://github.com/google/brunsli/

"},{"location":"available_software/detail/Brunsli/#available-modules","title":"Available modules","text":"

The overview below shows which Brunsli installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Brunsli, load one of these modules using a module load command like:

module load Brunsli/0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Brunsli/0.1-GCCcore-13.2.0 x x x x x x x x x Brunsli/0.1-GCCcore-12.3.0 x x x x x x x x x Brunsli/0.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CD-HIT/","title":"CD-HIT","text":"

CD-HIT is a very widely used program for clustering and comparing protein or nucleotide sequences.

http://weizhongli-lab.org/cd-hit/

"},{"location":"available_software/detail/CD-HIT/#available-modules","title":"Available modules","text":"

The overview below shows which CD-HIT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CD-HIT, load one of these modules using a module load command like:

module load CD-HIT/4.8.1-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CD-HIT/4.8.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CDO/","title":"CDO","text":"

CDO is a collection of command line Operators to manipulate and analyse Climate and NWP model Data.

https://code.zmaw.de/projects/cdo

"},{"location":"available_software/detail/CDO/#available-modules","title":"Available modules","text":"

The overview below shows which CDO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CDO, load one of these modules using a module load command like:

module load CDO/2.2.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CDO/2.2.2-gompi-2023b x x x x x x x x x CDO/2.2.2-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/CFITSIO/","title":"CFITSIO","text":"

CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format.

https://heasarc.gsfc.nasa.gov/fitsio/

"},{"location":"available_software/detail/CFITSIO/#available-modules","title":"Available modules","text":"

The overview below shows which CFITSIO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CFITSIO, load one of these modules using a module load command like:

module load CFITSIO/4.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CFITSIO/4.3.1-GCCcore-13.2.0 x x x x x x x x x CFITSIO/4.3.0-GCCcore-12.3.0 x x x x x x x x x CFITSIO/4.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CGAL/","title":"CGAL","text":"

The goal of the CGAL Open Source Project is to provide easy access to efficient and reliable geometric algorithms in the form of a C++ library.

https://www.cgal.org/

"},{"location":"available_software/detail/CGAL/#available-modules","title":"Available modules","text":"

The overview below shows which CGAL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CGAL, load one of these modules using a module load command like:

module load CGAL/5.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CGAL/5.6-GCCcore-12.3.0 x x x x x x x x x CGAL/5.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CMake/","title":"CMake","text":"

CMake, the cross-platform, open-source build system. CMake is a family of tools designed to build, test and package software.

https://www.cmake.org
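A minimal CMakeLists.txt sketch (the project and source file names are hypothetical) showing the kind of build description CMake consumes:

```cmake
cmake_minimum_required(VERSION 3.24)
project(hello LANGUAGES C)

# One executable target built from a single source file
add_executable(hello hello.c)
```

With this file in place, a typical out-of-source build is `cmake -B build` followed by `cmake --build build`.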

"},{"location":"available_software/detail/CMake/#available-modules","title":"Available modules","text":"

The overview below shows which CMake installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CMake, load one of these modules using a module load command like:

module load CMake/3.27.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CMake/3.27.6-GCCcore-13.2.0 x x x x x x x x x CMake/3.26.3-GCCcore-12.3.0 x x x x x x x x x CMake/3.24.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/CP2K/","title":"CP2K","text":"

CP2K is a freely available (GPL) program, written in Fortran 95, to perform atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods such as e.g. density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials.

https://www.cp2k.org/

"},{"location":"available_software/detail/CP2K/#available-modules","title":"Available modules","text":"

The overview below shows which CP2K installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CP2K, load one of these modules using a module load command like:

module load CP2K/2023.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CP2K/2023.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CUDA-Samples/","title":"CUDA-Samples","text":"

Samples for CUDA developers demonstrating features of the CUDA Toolkit.

https://github.com/NVIDIA/cuda-samples

"},{"location":"available_software/detail/CUDA-Samples/#available-modules","title":"Available modules","text":"

The overview below shows which CUDA-Samples installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CUDA-Samples, load one of these modules using a module load command like:

module load CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA-Samples/12.1-GCC-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/CUDA/","title":"CUDA","text":"

CUDA (formerly Compute Unified Device Architecture) is a parallel computing platform and programming model created by NVIDIA and implemented by the graphics processing units (GPUs) that they produce. CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs.

https://developer.nvidia.com/cuda-toolkit

"},{"location":"available_software/detail/CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CUDA, load one of these modules using a module load command like:

module load CUDA/12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CUDA/12.1.1 x x x x x x - x x"},{"location":"available_software/detail/CapnProto/","title":"CapnProto","text":"

Cap\u2019n Proto is an insanely fast data interchange format and capability-based RPC system.

https://capnproto.org

"},{"location":"available_software/detail/CapnProto/#available-modules","title":"Available modules","text":"

The overview below shows which CapnProto installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CapnProto, load one of these modules using a module load command like:

module load CapnProto/1.0.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CapnProto/1.0.1-GCCcore-12.3.0 x x x x x x x x x CapnProto/0.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Cartopy/","title":"Cartopy","text":"

Cartopy is a Python package designed to make drawing maps for data analysis and visualisation easy.

https://scitools.org.uk/cartopy/docs/latest/

"},{"location":"available_software/detail/Cartopy/#available-modules","title":"Available modules","text":"

The overview below shows which Cartopy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cartopy, load one of these modules using a module load command like:

module load Cartopy/0.22.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cartopy/0.22.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cartopy/#cartopy0220-foss-2023a","title":"Cartopy/0.22.0-foss-2023a","text":"

This is a list of extensions included in the module:

Cartopy-0.22.0, OWSLib-0.29.3, pyepsg-0.4.0, pykdtree-1.3.10, pyshp-2.3.1

"},{"location":"available_software/detail/Cassiopeia/","title":"Cassiopeia","text":"

A Package for Cas9-Enabled Single Cell Lineage Tracing Tree Reconstruction.

https://github.com/YosefLab/Cassiopeia

"},{"location":"available_software/detail/Cassiopeia/#available-modules","title":"Available modules","text":"

The overview below shows which Cassiopeia installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cassiopeia, load one of these modules using a module load command like:

module load Cassiopeia/2.0.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cassiopeia/2.0.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cassiopeia/#cassiopeia200-foss-2023a","title":"Cassiopeia/2.0.0-foss-2023a","text":"

This is a list of extensions included in the module:

bleach-6.1.0, Cassiopeia-2.0.0, comm-0.2.2, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.19.1, hits-0.4.0, ipywidgets-8.1.2, itolapi-4.1.4, jupyter_client-8.6.1, jupyter_core-5.7.2, jupyter_packaging-0.12.3, jupyterlab_pygments-0.3.0, jupyterlab_widgets-3.0.10, Levenshtein-0.22.0, mistune-3.0.2, nbclient-0.10.0, nbconvert-7.16.3, nbformat-5.10.3, ngs-tools-1.8.5, pandocfilters-1.5.1, python-Levenshtein-0.22.0, shortuuid-1.0.13, tinycss2-1.2.1, traitlets-5.14.2, widgetsnbextension-4.0.10

"},{"location":"available_software/detail/Catch2/","title":"Catch2","text":"

A modern, C++-native, header-only, test framework for unit-tests, TDD and BDD - using C++11, C++14, C++17 and later

https://github.com/catchorg/Catch2

"},{"location":"available_software/detail/Catch2/#available-modules","title":"Available modules","text":"

The overview below shows which Catch2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Catch2, load one of these modules using a module load command like:

module load Catch2/2.13.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Catch2/2.13.9-GCCcore-13.2.0 x x x x x x x x x Catch2/2.13.9-GCCcore-12.3.0 x x x x x x x x x Catch2/2.13.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Cbc/","title":"Cbc","text":"

Cbc (Coin-or branch and cut) is an open-source mixed integer linear programming solver written in C++. It can be used as a callable library or as a stand-alone executable.

https://github.com/coin-or/Cbc

"},{"location":"available_software/detail/Cbc/#available-modules","title":"Available modules","text":"

The overview below shows which Cbc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cbc, load one of these modules using a module load command like:

module load Cbc/2.10.11-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cbc/2.10.11-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Cgl/","title":"Cgl","text":"

The COIN-OR Cut Generation Library (Cgl) is a collection of cut generators that can be used with other COIN-OR packages that make use of cuts, such as, among others, the linear solver Clp or the mixed integer linear programming solvers Cbc or BCP. Cgl uses the abstract class OsiSolverInterface (see Osi) to use or communicate with a solver. It does not directly call a solver.

https://github.com/coin-or/Cgl

"},{"location":"available_software/detail/Cgl/#available-modules","title":"Available modules","text":"

The overview below shows which Cgl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cgl, load one of these modules using a module load command like:

module load Cgl/0.60.8-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cgl/0.60.8-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Clp/","title":"Clp","text":"

Clp (Coin-or linear programming) is an open-source linear programming solver. It is primarily meant to be used as a callable library, but a basic, stand-alone executable version is also available.

https://github.com/coin-or/Clp

"},{"location":"available_software/detail/Clp/#available-modules","title":"Available modules","text":"

The overview below shows which Clp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Clp, load one of these modules using a module load command like:

module load Clp/1.17.9-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Clp/1.17.9-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CoinUtils/","title":"CoinUtils","text":"

CoinUtils (Coin-OR Utilities) is an open-source collection of classes and functions that are generally useful to more than one COIN-OR project.

https://github.com/coin-or/CoinUtils

"},{"location":"available_software/detail/CoinUtils/#available-modules","title":"Available modules","text":"

The overview below shows which CoinUtils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CoinUtils, load one of these modules using a module load command like:

module load CoinUtils/2.11.10-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CoinUtils/2.11.10-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Critic2/","title":"Critic2","text":"

Critic2 is a program for the analysis of quantum mechanical calculation results in molecules and periodic solids.

https://aoterodelaroza.github.io/critic2/

"},{"location":"available_software/detail/Critic2/#available-modules","title":"Available modules","text":"

The overview below shows which Critic2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Critic2, load one of these modules using a module load command like:

module load Critic2/1.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Critic2/1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/CubeLib/","title":"CubeLib","text":"

Cube, which is used as a performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube general purpose C++ library component and command-line tools.

https://www.scalasca.org/software/cube-4.x/download.html

"},{"location":"available_software/detail/CubeLib/#available-modules","title":"Available modules","text":"

The overview below shows which CubeLib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CubeLib, load one of these modules using a module load command like:

module load CubeLib/4.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CubeLib/4.8.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/CubeWriter/","title":"CubeWriter","text":"

Cube, which is used as a performance report explorer for Scalasca and Score-P, is a generic tool for displaying a multi-dimensional performance space consisting of the dimensions (i) performance metric, (ii) call path, and (iii) system resource. Each dimension can be represented as a tree, where non-leaf nodes of the tree can be collapsed or expanded to achieve the desired level of granularity. This module provides the Cube high-performance C writer library component.

https://www.scalasca.org/software/cube-4.x/download.html

"},{"location":"available_software/detail/CubeWriter/#available-modules","title":"Available modules","text":"

The overview below shows which CubeWriter installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using CubeWriter, load one of these modules using a module load command like:

module load CubeWriter/4.8.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 CubeWriter/4.8.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/Cython/","title":"Cython","text":"

Cython is an optimising static compiler for both the Python programming language and the extended Cython programming language (based on Pyrex).

https://cython.org/

"},{"location":"available_software/detail/Cython/#available-modules","title":"Available modules","text":"

The overview below shows which Cython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Cython, load one of these modules using a module load command like:

module load Cython/3.0.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Cython/3.0.10-GCCcore-13.2.0 x x x x x x x x x Cython/3.0.8-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DB/","title":"DB","text":"

Berkeley DB enables the development of custom data management solutions, without the overhead traditionally associated with such custom projects.

https://www.oracle.com/technetwork/products/berkeleydb

"},{"location":"available_software/detail/DB/#available-modules","title":"Available modules","text":"

The overview below shows which DB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DB, load one of these modules using a module load command like:

module load DB/18.1.40-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB/18.1.40-GCCcore-12.3.0 x x x x x x x x x DB/18.1.40-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/DB_File/","title":"DB_File","text":"

Perl5 access to Berkeley DB version 1.x.

https://perldoc.perl.org/DB_File.html

"},{"location":"available_software/detail/DB_File/#available-modules","title":"Available modules","text":"

The overview below shows which DB_File installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DB_File, load one of these modules using a module load command like:

module load DB_File/1.859-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DB_File/1.859-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DIAMOND/","title":"DIAMOND","text":"

Accelerated BLAST compatible local sequence aligner

https://github.com/bbuchfink/diamond

"},{"location":"available_software/detail/DIAMOND/#available-modules","title":"Available modules","text":"

The overview below shows which DIAMOND installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DIAMOND, load one of these modules using a module load command like:

module load DIAMOND/2.1.8-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DIAMOND/2.1.8-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/DP3/","title":"DP3","text":"

DP3: streaming processing pipeline for radio interferometric data.

https://dp3.readthedocs.io/

"},{"location":"available_software/detail/DP3/#available-modules","title":"Available modules","text":"

The overview below shows which DP3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DP3, load one of these modules using a module load command like:

module load DP3/6.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DP3/6.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/DendroPy/","title":"DendroPy","text":"

A Python library for phylogenetics and phylogenetic computing: reading, writing, simulation, processing and manipulation of phylogenetic trees (phylogenies) and characters.

https://dendropy.org/

"},{"location":"available_software/detail/DendroPy/#available-modules","title":"Available modules","text":"

The overview below shows which DendroPy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using DendroPy, load one of these modules using a module load command like:

module load DendroPy/4.6.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 DendroPy/4.6.1-GCCcore-12.3.0 x x x x x x x x x DendroPy/4.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Doxygen/","title":"Doxygen","text":"

Doxygen is a documentation system for C++, C, Java, Objective-C, Python, IDL (Corba and Microsoft flavors), Fortran, VHDL, PHP, C#, and to some extent D.

https://www.doxygen.org

"},{"location":"available_software/detail/Doxygen/#available-modules","title":"Available modules","text":"

The overview below shows which Doxygen installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Doxygen, load one of these modules using a module load command like:

module load Doxygen/1.9.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Doxygen/1.9.8-GCCcore-13.2.0 x x x x x x x x x Doxygen/1.9.7-GCCcore-12.3.0 x x x x x x x x x Doxygen/1.9.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/EESSI-extend/","title":"EESSI-extend","text":"

The goal of the European Environment for Scientific Software Installations (EESSI, pronounced as \"easy\") is to build a common stack of scientific software installations for HPC systems and beyond, including laptops, personal workstations and cloud infrastructure. This module allows you to extend EESSI using the same configuration for EasyBuild as EESSI itself uses. A number of environment variables control the behaviour of the module:
- EESSI_USER_INSTALL can be set to a location to install modules for use by the user only. The location must already exist on the filesystem.
- EESSI_PROJECT_INSTALL can be set to a location to install modules for use by a project. The location must already exist on the filesystem, and you should ensure that it has the correct Linux group and that the SGID permission is set on that directory (chmod g+s $EESSI_PROJECT_INSTALL), so that all members of the group have permission to read and write installations.
- EESSI_SITE_INSTALL is either defined or not, and cannot be used with another of these environment variables. A site installation is done in a defined location, and any installations there are (by default) world readable.
- EESSI_CVMFS_INSTALL is either defined or not, and cannot be used with another of these environment variables. A CVMFS installation targets a defined location which will be ingested into CVMFS, and is only useful for CVMFS administrators.
- If none of the environment variables above are defined, EESSI_USER_INSTALL is assumed, with a value of $HOME/EESSI.
If both EESSI_USER_INSTALL and EESSI_PROJECT_INSTALL are defined, both sets of installations are exposed, but new installations are created as user installations.

https://eessi.io/docs/

"},{"location":"available_software/detail/EESSI-extend/#available-modules","title":"Available modules","text":"

The overview below shows which EESSI-extend installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EESSI-extend, load one of these modules using a module load command like:

module load EESSI-extend/2023.06-easybuild\n
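The install-location variables described above are picked up when the module is loaded. A minimal sketch, using a hypothetical temporary path, of preparing a project installation location with the SGID bit mentioned above:

```shell
# Sketch with a hypothetical demo path; in practice use a shared project directory.
EESSI_PROJECT_INSTALL="${TMPDIR:-/tmp}/eessi-project-demo"
mkdir -p "$EESSI_PROJECT_INSTALL"
# Set the SGID bit so all group members can read/write installations (see above).
chmod g+s "$EESSI_PROJECT_INSTALL"
export EESSI_PROJECT_INSTALL
# module load EESSI-extend/2023.06-easybuild  # would now target this location
ls -ld "$EESSI_PROJECT_INSTALL"
```

Pointing the variable at a group-owned directory before loading the module is what makes new installations land in the shared project location rather than in $HOME/EESSI.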

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EESSI-extend/2023.06-easybuild x x x x x x x x x"},{"location":"available_software/detail/ELPA/","title":"ELPA","text":"

Eigenvalue SoLvers for Petaflop-Applications.

https://elpa.mpcdf.mpg.de/

"},{"location":"available_software/detail/ELPA/#available-modules","title":"Available modules","text":"

The overview below shows which ELPA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ELPA, load one of these modules using a module load command like:

module load ELPA/2023.05.001-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ELPA/2023.05.001-foss-2023a x x x x x x x x x ELPA/2022.05.001-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/ESPResSo/","title":"ESPResSo","text":"

A software package for performing and analyzing scientific Molecular Dynamics simulations.

https://espressomd.org/wordpress

"},{"location":"available_software/detail/ESPResSo/#available-modules","title":"Available modules","text":"

The overview below shows which ESPResSo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ESPResSo, load one of these modules using a module load command like:

module load ESPResSo/4.2.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ESPResSo/4.2.2-foss-2023b x x x x x x x x x ESPResSo/4.2.2-foss-2023a x x x x x x x x x ESPResSo/4.2.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/ETE/","title":"ETE","text":"

A Python framework for the analysis and visualization of trees

http://etetoolkit.org

"},{"location":"available_software/detail/ETE/#available-modules","title":"Available modules","text":"

The overview below shows which ETE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ETE, load one of these modules using a module load command like:

module load ETE/3.1.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ETE/3.1.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/EasyBuild/","title":"EasyBuild","text":"

EasyBuild is a software build and installation framework written in Python that allows you to install software in a structured, repeatable and robust way.

https://easybuilders.github.io/easybuild

"},{"location":"available_software/detail/EasyBuild/#available-modules","title":"Available modules","text":"

The overview below shows which EasyBuild installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EasyBuild, load one of these modules using a module load command like:

module load EasyBuild/4.9.4\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EasyBuild/4.9.4 x x x x x x x x x EasyBuild/4.9.3 x x x x x x x x x EasyBuild/4.9.2 x x x x x x x x x EasyBuild/4.9.1 x x x x x x x x x EasyBuild/4.9.0 x x x x x x x x x EasyBuild/4.8.2 x x x x x x x x x"},{"location":"available_software/detail/Eigen/","title":"Eigen","text":"

Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.

https://eigen.tuxfamily.org

"},{"location":"available_software/detail/Eigen/#available-modules","title":"Available modules","text":"

The overview below shows which Eigen installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Eigen, load one of these modules using a module load command like:

module load Eigen/3.4.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Eigen/3.4.0-GCCcore-13.2.0 x x x x x x x x x Eigen/3.4.0-GCCcore-12.3.0 x x x x x x x x x Eigen/3.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/EveryBeam/","title":"EveryBeam","text":"

Library that provides the antenna response pattern for several instruments, such as LOFAR (and LOBES), SKA (OSKAR), MWA, JVLA, etc.

https://everybeam.readthedocs.io/

"},{"location":"available_software/detail/EveryBeam/#available-modules","title":"Available modules","text":"

The overview below shows which EveryBeam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using EveryBeam, load one of these modules using a module load command like:

module load EveryBeam/0.5.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 EveryBeam/0.5.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/Extrae/","title":"Extrae","text":"

Extrae is the package devoted to generating Paraver trace-files for a post-mortem analysis. Extrae is a tool that uses different interposition mechanisms to inject probes into the target application so as to gather information regarding the application performance.

https://tools.bsc.es/extrae

"},{"location":"available_software/detail/Extrae/#available-modules","title":"Available modules","text":"

The overview below shows which Extrae installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Extrae, load one of these modules using a module load command like:

module load Extrae/4.2.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Extrae/4.2.0-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/FFTW.MPI/","title":"FFTW.MPI","text":"

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.

https://www.fftw.org

"},{"location":"available_software/detail/FFTW.MPI/#available-modules","title":"Available modules","text":"

The overview below shows which FFTW.MPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFTW.MPI, load one of these modules using a module load command like:

module load FFTW.MPI/3.3.10-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW.MPI/3.3.10-gompi-2023b x x x x x x x x x FFTW.MPI/3.3.10-gompi-2023a x x x x x x x x x FFTW.MPI/3.3.10-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/FFTW/","title":"FFTW","text":"

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions, of arbitrary input size, and of both real and complex data.

https://www.fftw.org

"},{"location":"available_software/detail/FFTW/#available-modules","title":"Available modules","text":"

The overview below shows which FFTW installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFTW, load one of these modules using a module load command like:

module load FFTW/3.3.10-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFTW/3.3.10-GCC-13.2.0 x x x x x x x x x FFTW/3.3.10-GCC-12.3.0 x x x x x x x x x FFTW/3.3.10-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FFmpeg/","title":"FFmpeg","text":"

A complete, cross-platform solution to record, convert and stream audio and video.

https://www.ffmpeg.org/

"},{"location":"available_software/detail/FFmpeg/#available-modules","title":"Available modules","text":"

The overview below shows which FFmpeg installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FFmpeg, load one of these modules using a module load command like:

module load FFmpeg/6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FFmpeg/6.0-GCCcore-13.2.0 x x x x x x x x x FFmpeg/6.0-GCCcore-12.3.0 x x x x x x x x x FFmpeg/5.1.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FLAC/","title":"FLAC","text":"

FLAC stands for Free Lossless Audio Codec, an audio format similar to MP3, but lossless, meaning that audio is compressed in FLAC without any loss in quality.

https://xiph.org/flac/

"},{"location":"available_software/detail/FLAC/#available-modules","title":"Available modules","text":"

The overview below shows which FLAC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FLAC, load one of these modules using a module load command like:

module load FLAC/1.4.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FLAC/1.4.3-GCCcore-13.2.0 x x x x x x x x x FLAC/1.4.2-GCCcore-12.3.0 x x x x x x x x x FLAC/1.4.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FLTK/","title":"FLTK","text":"

FLTK is a cross-platform C++ GUI toolkit for UNIX/Linux (X11), Microsoft Windows, and MacOS X. FLTK provides modern GUI functionality without the bloat and supports 3D graphics via OpenGL and its built-in GLUT emulation.

https://www.fltk.org

"},{"location":"available_software/detail/FLTK/#available-modules","title":"Available modules","text":"

The overview below shows which FLTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FLTK, load one of these modules using a module load command like:

module load FLTK/1.3.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FLTK/1.3.8-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FastME/","title":"FastME","text":"

FastME: a comprehensive, accurate and fast distance-based phylogeny inference program.

http://www.atgc-montpellier.fr/fastme/

"},{"location":"available_software/detail/FastME/#available-modules","title":"Available modules","text":"

The overview below shows which FastME installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FastME, load one of these modules using a module load command like:

module load FastME/2.1.6.3-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FastME/2.1.6.3-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Fiona/","title":"Fiona","text":"

Fiona is designed to be simple and dependable. It focuses on reading and writing data in standard Python IO style and relies upon familiar Python types and protocols such as files, dictionaries, mappings, and iterators instead of classes specific to OGR. Fiona can read and write real-world data using multi-layered GIS formats and zipped virtual file systems and integrates readily with other Python GIS packages such as pyproj, Rtree, and Shapely.

https://github.com/Toblerity/Fiona

"},{"location":"available_software/detail/Fiona/#available-modules","title":"Available modules","text":"

The overview below shows which Fiona installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Fiona, load one of these modules using a module load command like:

module load Fiona/1.9.5-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Fiona/1.9.5-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Fiona/#fiona195-foss-2023a","title":"Fiona/1.9.5-foss-2023a","text":"

This is a list of extensions included in the module:

click-plugins-1.1.1, cligj-0.7.2, fiona-1.9.5, munch-4.0.0

"},{"location":"available_software/detail/Flask/","title":"Flask","text":"

Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. This module includes the Flask extensions: Flask-Cors

https://www.palletsprojects.com/p/flask/

"},{"location":"available_software/detail/Flask/#available-modules","title":"Available modules","text":"

The overview below shows which Flask installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Flask, load one of these modules using a module load command like:

module load Flask/2.2.3-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Flask/2.2.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Flask/#flask223-gcccore-1220","title":"Flask/2.2.3-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

asgiref-3.6.0, cachelib-0.10.2, Flask-2.2.3, Flask-Cors-3.0.10, Flask-Session-0.4.0, itsdangerous-2.1.2, Werkzeug-2.2.3

"},{"location":"available_software/detail/FlexiBLAS/","title":"FlexiBLAS","text":"

FlexiBLAS is a wrapper library that enables the exchange of the BLAS and LAPACK implementation used by a program without recompiling or relinking it.

https://gitlab.mpi-magdeburg.mpg.de/software/flexiblas-release

"},{"location":"available_software/detail/FlexiBLAS/#available-modules","title":"Available modules","text":"

The overview below shows which FlexiBLAS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FlexiBLAS, load one of these modules using a module load command like:

module load FlexiBLAS/3.3.1-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FlexiBLAS/3.3.1-GCC-13.2.0 x x x x x x x x x FlexiBLAS/3.3.1-GCC-12.3.0 x x x x x x x x x FlexiBLAS/3.2.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/FragGeneScan/","title":"FragGeneScan","text":"

FragGeneScan is an application for finding (fragmented) genes in short reads.

https://omics.informatics.indiana.edu/FragGeneScan/

"},{"location":"available_software/detail/FragGeneScan/#available-modules","title":"Available modules","text":"

The overview below shows which FragGeneScan installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FragGeneScan, load one of these modules using a module load command like:

module load FragGeneScan/1.31-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FragGeneScan/1.31-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FreeImage/","title":"FreeImage","text":"

FreeImage is an Open Source library project for developers who would like to support popular graphics image formats like PNG, BMP, JPEG, TIFF and others as needed by today's multimedia applications. FreeImage is easy to use, fast, and multithreading safe.

http://freeimage.sourceforge.net

"},{"location":"available_software/detail/FreeImage/#available-modules","title":"Available modules","text":"

The overview below shows which FreeImage installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FreeImage, load one of these modules using a module load command like:

module load FreeImage/3.18.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FreeImage/3.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/FriBidi/","title":"FriBidi","text":"

The Free Implementation of the Unicode Bidirectional Algorithm.

https://github.com/fribidi/fribidi

"},{"location":"available_software/detail/FriBidi/#available-modules","title":"Available modules","text":"

The overview below shows which FriBidi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using FriBidi, load one of these modules using a module load command like:

module load FriBidi/1.0.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 FriBidi/1.0.13-GCCcore-13.2.0 x x x x x x x x x FriBidi/1.0.12-GCCcore-12.3.0 x x x x x x x x x FriBidi/1.0.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GATK/","title":"GATK","text":"

The Genome Analysis Toolkit or GATK is a software package developed at the Broad Institute to analyse next-generation resequencing data. The toolkit offers a wide variety of tools, with a primary focus on variant discovery and genotyping as well as strong emphasis on data quality assurance. Its robust architecture, powerful processing engine and high-performance computing features make it capable of taking on projects of any size.

https://www.broadinstitute.org/gatk/

"},{"location":"available_software/detail/GATK/#available-modules","title":"Available modules","text":"

The overview below shows which GATK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GATK, load one of these modules using a module load command like:

module load GATK/4.5.0.0-GCCcore-12.3.0-Java-17\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GATK/4.5.0.0-GCCcore-12.3.0-Java-17 x x x x x x x x x"},{"location":"available_software/detail/GCC/","title":"GCC","text":"

The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).

https://gcc.gnu.org/

"},{"location":"available_software/detail/GCC/#available-modules","title":"Available modules","text":"

The overview below shows which GCC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GCC, load one of these modules using a module load command like:

module load GCC/13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCC/13.2.0 x x x x x x x x x GCC/12.3.0 x x x x x x x x x GCC/12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GCCcore/","title":"GCCcore","text":"

The GNU Compiler Collection includes front ends for C, C++, Objective-C, Fortran, Java, and Ada, as well as libraries for these languages (libstdc++, libgcj,...).

https://gcc.gnu.org/

"},{"location":"available_software/detail/GCCcore/#available-modules","title":"Available modules","text":"

The overview below shows which GCCcore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GCCcore, load one of these modules using a module load command like:

module load GCCcore/13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GCCcore/13.2.0 x x x x x x x x x GCCcore/12.3.0 x x x x x x x x x GCCcore/12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GDAL/","title":"GDAL","text":"

GDAL is a translator library for raster geospatial data formats that is released under an X/MIT style Open Source license by the Open Source Geospatial Foundation. As a library, it presents a single abstract data model to the calling application for all supported formats. It also comes with a variety of useful commandline utilities for data translation and processing.

https://www.gdal.org

"},{"location":"available_software/detail/GDAL/#available-modules","title":"Available modules","text":"

The overview below shows which GDAL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDAL, load one of these modules using a module load command like:

module load GDAL/3.9.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDAL/3.9.0-foss-2023b x x x x x x x x x GDAL/3.7.1-foss-2023a x x x x x x x x x GDAL/3.6.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/GDB/","title":"GDB","text":"

The GNU Project Debugger

https://www.gnu.org/software/gdb/gdb.html

"},{"location":"available_software/detail/GDB/#available-modules","title":"Available modules","text":"

The overview below shows which GDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDB, load one of these modules using a module load command like:

module load GDB/13.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDB/13.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/GDRCopy/","title":"GDRCopy","text":"

A low-latency GPU memory copy library based on NVIDIA GPUDirect RDMA technology.

https://github.com/NVIDIA/gdrcopy

"},{"location":"available_software/detail/GDRCopy/#available-modules","title":"Available modules","text":"

The overview below shows which GDRCopy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GDRCopy, load one of these modules using a module load command like:

module load GDRCopy/2.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GDRCopy/2.4-GCCcore-13.2.0 x x x x x x x x x GDRCopy/2.3.1-GCCcore-12.3.0 x x x x x x - x x"},{"location":"available_software/detail/GEOS/","title":"GEOS","text":"

GEOS (Geometry Engine - Open Source) is a C++ port of the Java Topology Suite (JTS)

https://trac.osgeo.org/geos

"},{"location":"available_software/detail/GEOS/#available-modules","title":"Available modules","text":"

The overview below shows which GEOS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GEOS, load one of these modules using a module load command like:

module load GEOS/3.12.1-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GEOS/3.12.1-GCC-13.2.0 x x x x x x x x x GEOS/3.12.0-GCC-12.3.0 x x x x x x x x x GEOS/3.11.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GL2PS/","title":"GL2PS","text":"

GL2PS: an OpenGL to PostScript printing library

https://www.geuz.org/gl2ps/

"},{"location":"available_software/detail/GL2PS/#available-modules","title":"Available modules","text":"

The overview below shows which GL2PS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GL2PS, load one of these modules using a module load command like:

module load GL2PS/1.4.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GL2PS/1.4.2-GCCcore-12.3.0 x x x x x x x x x GL2PS/1.4.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GLPK/","title":"GLPK","text":"

The GLPK (GNU Linear Programming Kit) package is intended for solving large-scale linear programming (LP), mixed integer programming (MIP), and other related problems. It is a set of routines written in ANSI C and organized in the form of a callable library.

https://www.gnu.org/software/glpk/

"},{"location":"available_software/detail/GLPK/#available-modules","title":"Available modules","text":"

The overview below shows which GLPK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GLPK, load one of these modules using a module load command like:

module load GLPK/5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLPK/5.0-GCCcore-13.2.0 x x x x x x x x x GLPK/5.0-GCCcore-12.3.0 x x x x x x x x x GLPK/5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GLib/","title":"GLib","text":"

GLib is one of the base libraries of the GTK+ project

https://www.gtk.org/

"},{"location":"available_software/detail/GLib/#available-modules","title":"Available modules","text":"

The overview below shows which GLib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GLib, load one of these modules using a module load command like:

module load GLib/2.78.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GLib/2.78.1-GCCcore-13.2.0 x x x x x x x x x GLib/2.77.1-GCCcore-12.3.0 x x x x x x x x x GLib/2.75.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GMP/","title":"GMP","text":"

GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers.

https://gmplib.org/

"},{"location":"available_software/detail/GMP/#available-modules","title":"Available modules","text":"

The overview below shows which GMP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GMP, load one of these modules using a module load command like:

module load GMP/6.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GMP/6.3.0-GCCcore-13.2.0 x x x x x x x x x GMP/6.2.1-GCCcore-12.3.0 x x x x x x x x x GMP/6.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GObject-Introspection/","title":"GObject-Introspection","text":"

GObject introspection is a middleware layer between C libraries (using GObject) and language bindings. The C library can be scanned at compile time and generate a metadata file, in addition to the actual native C library. Then at runtime, language bindings can read this metadata and automatically provide bindings to call into the C library.

https://gi.readthedocs.io/en/latest/

"},{"location":"available_software/detail/GObject-Introspection/#available-modules","title":"Available modules","text":"

The overview below shows which GObject-Introspection installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GObject-Introspection, load one of these modules using a module load command like:

module load GObject-Introspection/1.78.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GObject-Introspection/1.78.1-GCCcore-13.2.0 x x x x x x x x x GObject-Introspection/1.76.1-GCCcore-12.3.0 x x x x x x x x x GObject-Introspection/1.74.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GROMACS/","title":"GROMACS","text":"

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. This is a CPU only build, containing both MPI and threadMPI binaries for both single and double precision. It also contains the gmxapi extension for the single precision MPI build.

https://www.gromacs.org

"},{"location":"available_software/detail/GROMACS/#available-modules","title":"Available modules","text":"

The overview below shows which GROMACS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GROMACS, load one of these modules using a module load command like:

module load GROMACS/2024.4-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GROMACS/2024.4-foss-2023b x x x x x x x x x GROMACS/2024.3-foss-2023b x x x x x x x x x GROMACS/2024.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/GROMACS/#gromacs20244-foss-2023b","title":"GROMACS/2024.4-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.4.2

"},{"location":"available_software/detail/GROMACS/#gromacs20243-foss-2023b","title":"GROMACS/2024.3-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.4.2

"},{"location":"available_software/detail/GROMACS/#gromacs20241-foss-2023b","title":"GROMACS/2024.1-foss-2023b","text":"

This is a list of extensions included in the module:

gmxapi-0.5.0

"},{"location":"available_software/detail/GSL/","title":"GSL","text":"

The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers. The library provides a wide range of mathematical routines such as random number generators, special functions and least-squares fitting.

https://www.gnu.org/software/gsl/

"},{"location":"available_software/detail/GSL/#available-modules","title":"Available modules","text":"

The overview below shows which GSL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GSL, load one of these modules using a module load command like:

module load GSL/2.7-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GSL/2.7-GCC-13.2.0 x x x x x x x x x GSL/2.7-GCC-12.3.0 x x x x x x x x x GSL/2.7-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GST-plugins-base/","title":"GST-plugins-base","text":"

GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.

https://gstreamer.freedesktop.org/

"},{"location":"available_software/detail/GST-plugins-base/#available-modules","title":"Available modules","text":"

The overview below shows which GST-plugins-base installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GST-plugins-base, load one of these modules using a module load command like:

module load GST-plugins-base/1.24.8-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GST-plugins-base/1.24.8-GCC-13.2.0 x x x x x x x x x GST-plugins-base/1.22.5-GCC-12.3.0 x x x x x x x x x GST-plugins-base/1.22.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GStreamer/","title":"GStreamer","text":"

GStreamer is a library for constructing graphs of media-handling components. The applications it supports range from simple Ogg/Vorbis playback, audio/video streaming to complex audio (mixing) and video (non-linear editing) processing.

https://gstreamer.freedesktop.org/

"},{"location":"available_software/detail/GStreamer/#available-modules","title":"Available modules","text":"

The overview below shows which GStreamer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GStreamer, load one of these modules using a module load command like:

module load GStreamer/1.24.8-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GStreamer/1.24.8-GCC-13.2.0 x x x x x x x x x GStreamer/1.22.5-GCC-12.3.0 x x x x x x x x x GStreamer/1.22.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GTK3/","title":"GTK3","text":"

GTK+ is the primary library used to construct user interfaces in GNOME. It provides all the user interface controls, or widgets, used in a common graphical application. Its object-oriented API allows you to construct user interfaces without dealing with the low-level details of drawing and device interaction.

https://developer.gnome.org/gtk3/stable/

"},{"location":"available_software/detail/GTK3/#available-modules","title":"Available modules","text":"

The overview below shows which GTK3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GTK3, load one of these modules using a module load command like:

module load GTK3/3.24.39-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GTK3/3.24.39-GCCcore-13.2.0 x x x x x x x x x GTK3/3.24.37-GCCcore-12.3.0 x x x x x x x x x GTK3/3.24.35-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Gdk-Pixbuf/","title":"Gdk-Pixbuf","text":"

The Gdk Pixbuf is a toolkit for image loading and pixel buffer manipulation. It is used by GTK+ 2 and GTK+ 3 to load and manipulate images. In the past it was distributed as part of GTK+ 2 but it was split off into a separate package in preparation for the change to GTK+ 3.

https://docs.gtk.org/gdk-pixbuf/

"},{"location":"available_software/detail/Gdk-Pixbuf/#available-modules","title":"Available modules","text":"

The overview below shows which Gdk-Pixbuf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Gdk-Pixbuf, load one of these modules using a module load command like:

module load Gdk-Pixbuf/2.42.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Gdk-Pixbuf/2.42.10-GCCcore-13.2.0 x x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.3.0 x x x x x x x x x Gdk-Pixbuf/2.42.10-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GenomeTools/","title":"GenomeTools","text":"

A comprehensive software library for efficient processing of structured genome annotations.

http://genometools.org

"},{"location":"available_software/detail/GenomeTools/#available-modules","title":"Available modules","text":"

The overview below shows which GenomeTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GenomeTools, load one of these modules using a module load command like:

module load GenomeTools/1.6.2-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GenomeTools/1.6.2-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Ghostscript/","title":"Ghostscript","text":"

Ghostscript is a versatile processor for PostScript data with the ability to render PostScript to different targets. It used to be part of the cups printing stack, but is no longer used for that.

https://ghostscript.com

"},{"location":"available_software/detail/Ghostscript/#available-modules","title":"Available modules","text":"

The overview below shows which Ghostscript installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ghostscript, load one of these modules using a module load command like:

module load Ghostscript/10.02.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ghostscript/10.02.1-GCCcore-13.2.0 x x x x x x x x x Ghostscript/10.01.2-GCCcore-12.3.0 x x x x x x x x x Ghostscript/10.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/GitPython/","title":"GitPython","text":"

GitPython is a python library used to interact with Git repositories

https://gitpython.readthedocs.org

"},{"location":"available_software/detail/GitPython/#available-modules","title":"Available modules","text":"

The overview below shows which GitPython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using GitPython, load one of these modules using a module load command like:

module load GitPython/3.1.40-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 GitPython/3.1.40-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/GitPython/#gitpython3140-gcccore-1230","title":"GitPython/3.1.40-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

gitdb-4.0.11, GitPython-3.1.40, smmap-5.0.1

"},{"location":"available_software/detail/Graphene/","title":"Graphene","text":"

Graphene is a thin layer of types for graphic libraries

https://ebassi.github.io/graphene/

"},{"location":"available_software/detail/Graphene/#available-modules","title":"Available modules","text":"

The overview below shows which Graphene installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Graphene, load one of these modules using a module load command like:

module load Graphene/1.10.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Graphene/1.10.8-GCCcore-13.2.0 x x x x x x x x x Graphene/1.10.8-GCCcore-12.3.0 x x x x x x x x x Graphene/1.10.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HDBSCAN/","title":"HDBSCAN","text":"

The hdbscan library is a suite of tools to use unsupervised learning to find clusters, or dense regions, of a dataset. The primary algorithm is HDBSCAN* as proposed by Campello, Moulavi, and Sander. The library provides a high performance implementation of this algorithm, along with tools for analysing the resulting clustering.

http://hdbscan.readthedocs.io/en/latest/

"},{"location":"available_software/detail/HDBSCAN/#available-modules","title":"Available modules","text":"

The overview below shows which HDBSCAN installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDBSCAN, load one of these modules using a module load command like:

module load HDBSCAN/0.8.38.post1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDBSCAN/0.8.38.post1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/HDF/","title":"HDF","text":"

HDF (also known as HDF4) is a library and multi-object file format for storing and managing data between machines.

https://www.hdfgroup.org/products/hdf4/

"},{"location":"available_software/detail/HDF/#available-modules","title":"Available modules","text":"

The overview below shows which HDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDF, load one of these modules using a module load command like:

module load HDF/4.2.16-2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF/4.2.16-2-GCCcore-13.2.0 x x x x x x x x x HDF/4.2.16-2-GCCcore-12.3.0 x x x x x x x x x HDF/4.2.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HDF5/","title":"HDF5","text":"

HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high volume and complex data.

https://portal.hdfgroup.org/display/support

"},{"location":"available_software/detail/HDF5/#available-modules","title":"Available modules","text":"

The overview below shows which HDF5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HDF5, load one of these modules using a module load command like:

module load HDF5/1.14.3-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HDF5/1.14.3-gompi-2023b x x x x x x x x x HDF5/1.14.0-gompi-2023a x x x x x x x x x HDF5/1.14.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/HMMER/","title":"HMMER","text":"

HMMER is used for searching sequence databases for homologs of protein sequences, and for making protein sequence alignments. It implements methods using probabilistic models called profile hidden Markov models (profile HMMs). Compared to BLAST, FASTA, and other sequence alignment and database search tools based on older scoring methodology, HMMER aims to be significantly more accurate and more able to detect remote homologs because of the strength of its underlying mathematical models. In the past, this strength came at significant computational expense, but in the new HMMER3 project, HMMER is now essentially as fast as BLAST.

http://hmmer.org/

"},{"location":"available_software/detail/HMMER/#available-modules","title":"Available modules","text":"

The overview below shows which HMMER installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HMMER, load one of these modules using a module load command like:

module load HMMER/3.4-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HMMER/3.4-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/HPL/","title":"HPL","text":"

HPL is a software package that solves a (random) dense linear system in double precision (64 bits) arithmetic on distributed-memory computers. It can thus be regarded as a portable as well as freely available implementation of the High Performance Computing Linpack Benchmark.

https://www.netlib.org/benchmark/hpl/

"},{"location":"available_software/detail/HPL/#available-modules","title":"Available modules","text":"

The overview below shows which HPL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HPL, load one of these modules using a module load command like:

module load HPL/2.3-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HPL/2.3-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/HTSlib/","title":"HTSlib","text":"

A C library for reading/writing high-throughput sequencing data. This package includes the utilities bgzip and tabix

https://www.htslib.org/

"},{"location":"available_software/detail/HTSlib/#available-modules","title":"Available modules","text":"

The overview below shows which HTSlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HTSlib, load one of these modules using a module load command like:

module load HTSlib/1.19.1-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HTSlib/1.19.1-GCC-13.2.0 x x x x x x x x x HTSlib/1.18-GCC-12.3.0 x x x x x x x x x HTSlib/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HarfBuzz/","title":"HarfBuzz","text":"

HarfBuzz is an OpenType text shaping engine.

https://www.freedesktop.org/wiki/Software/HarfBuzz

"},{"location":"available_software/detail/HarfBuzz/#available-modules","title":"Available modules","text":"

The overview below shows which HarfBuzz installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HarfBuzz, load one of these modules using a module load command like:

module load HarfBuzz/8.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HarfBuzz/8.2.2-GCCcore-13.2.0 x x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.3.0 x x x x x x x x x HarfBuzz/5.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/HepMC3/","title":"HepMC3","text":"

HepMC is a standard for storing Monte Carlo event data.

http://hepmc.web.cern.ch/hepmc/

"},{"location":"available_software/detail/HepMC3/#available-modules","title":"Available modules","text":"

The overview below shows which HepMC3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using HepMC3, load one of these modules using a module load command like:

module load HepMC3/3.2.6-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 HepMC3/3.2.6-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Highway/","title":"Highway","text":"

Highway is a C++ library for SIMD (Single Instruction, Multiple Data), i.e. applying the same operation to 'lanes'.

https://github.com/google/highway

"},{"location":"available_software/detail/Highway/#available-modules","title":"Available modules","text":"

The overview below shows which Highway installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Highway, load one of these modules using a module load command like:

module load Highway/1.0.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Highway/1.0.4-GCCcore-12.3.0 x x x x x x x x x Highway/1.0.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Hypre/","title":"Hypre","text":"

Hypre is a library for solving large, sparse linear systems of equations on massively parallel computers. The problems of interest arise in the simulation codes being developed at LLNL and elsewhere to study physical phenomena in the defense, environmental, energy, and biological sciences.

https://computation.llnl.gov/projects/hypre-scalable-linear-solvers-multigrid-methods

"},{"location":"available_software/detail/Hypre/#available-modules","title":"Available modules","text":"

The overview below shows which Hypre installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Hypre, load one of these modules using a module load command like:

module load Hypre/2.29.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Hypre/2.29.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/ICU/","title":"ICU","text":"

ICU is a mature, widely used set of C/C++ and Java libraries providing Unicode and Globalization support for software applications.

https://icu.unicode.org

"},{"location":"available_software/detail/ICU/#available-modules","title":"Available modules","text":"

The overview below shows which ICU installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ICU, load one of these modules using a module load command like:

module load ICU/74.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ICU/74.1-GCCcore-13.2.0 x x x x x x x x x ICU/73.2-GCCcore-12.3.0 x x x x x x x x x ICU/72.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/IDG/","title":"IDG","text":"

Image Domain Gridding (IDG) is a fast method for convolutional resampling (gridding/degridding) of radio astronomical data (visibilities). Direction dependent effects (DDEs) or A-terms can be applied in the gridding process. The algorithm is described in \"Image Domain Gridding: a fast method for convolutional resampling of visibilities\", Van der Tol (2018). The implementation is described in \"Radio-astronomical imaging on graphics processors\", Veenboer (2020). Please cite these papers in publications using IDG.

https://idg.readthedocs.io/

"},{"location":"available_software/detail/IDG/#available-modules","title":"Available modules","text":"

The overview below shows which IDG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IDG, load one of these modules using a module load command like:

module load IDG/1.2.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IDG/1.2.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/IPython/","title":"IPython","text":"

IPython provides a rich architecture for interactive computing with: Powerful interactive shells (terminal and Qt-based). A browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media. Support for interactive data visualization and use of GUI toolkits. Flexible, embeddable interpreters to load into your own projects. Easy to use, high performance tools for parallel computing.

https://ipython.org/index.html

"},{"location":"available_software/detail/IPython/#available-modules","title":"Available modules","text":"

The overview below shows which IPython installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IPython, load one of these modules using a module load command like:

module load IPython/8.17.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IPython/8.17.2-GCCcore-13.2.0 x x x x x x x x x IPython/8.14.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/IPython/#ipython8172-gcccore-1320","title":"IPython/8.17.2-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

asttokens-2.4.1, backcall-0.2.0, executing-2.0.1, ipython-8.17.2, matplotlib-inline-0.1.6, pickleshare-0.7.5, prompt_toolkit-3.0.41, pure_eval-0.2.2, stack_data-0.6.3, traitlets-5.13.0

"},{"location":"available_software/detail/IPython/#ipython8140-gcccore-1230","title":"IPython/8.14.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

asttokens-2.2.1, backcall-0.2.0, executing-1.2.0, ipython-8.14.0, jedi-0.19.0, matplotlib-inline-0.1.6, parso-0.8.3, pickleshare-0.7.5, prompt_toolkit-3.0.39, pure_eval-0.2.2, stack_data-0.6.2, traitlets-5.9.0

"},{"location":"available_software/detail/IQ-TREE/","title":"IQ-TREE","text":"

Efficient phylogenomic software by maximum likelihood

http://www.iqtree.org/

"},{"location":"available_software/detail/IQ-TREE/#available-modules","title":"Available modules","text":"

The overview below shows which IQ-TREE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using IQ-TREE, load one of these modules using a module load command like:

module load IQ-TREE/2.3.5-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 IQ-TREE/2.3.5-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/ISA-L/","title":"ISA-L","text":"

Intelligent Storage Acceleration Library

https://github.com/intel/isa-l

"},{"location":"available_software/detail/ISA-L/#available-modules","title":"Available modules","text":"

The overview below shows which ISA-L installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ISA-L, load one of these modules using a module load command like:

module load ISA-L/2.30.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ISA-L/2.30.0-GCCcore-12.3.0 x x x x x x x x x ISA-L/2.30.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ISL/","title":"ISL","text":"

isl is a library for manipulating sets and relations of integer points bounded by linear constraints.

https://libisl.sourceforge.io

"},{"location":"available_software/detail/ISL/#available-modules","title":"Available modules","text":"

The overview below shows which ISL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ISL, load one of these modules using a module load command like:

module load ISL/0.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ISL/0.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ITSTool/","title":"ITSTool","text":"

ITS Tool allows you to translate your XML documents with PO files

http://itstool.org/

"},{"location":"available_software/detail/ITSTool/#available-modules","title":"Available modules","text":"

The overview below shows which ITSTool installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ITSTool, load one of these modules using a module load command like:

module load ITSTool/2.0.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ITSTool/2.0.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ImageMagick/","title":"ImageMagick","text":"

ImageMagick is a software suite to create, edit, compose, or convert bitmap images

https://www.imagemagick.org/

"},{"location":"available_software/detail/ImageMagick/#available-modules","title":"Available modules","text":"

The overview below shows which ImageMagick installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ImageMagick, load one of these modules using a module load command like:

module load ImageMagick/7.1.1-34-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ImageMagick/7.1.1-34-GCCcore-13.2.0 x x x x x x x x x ImageMagick/7.1.1-15-GCCcore-12.3.0 x x x x x x x x x ImageMagick/7.1.0-53-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Imath/","title":"Imath","text":"

Imath is a C++ and Python library of 2D and 3D vector, matrix, and math operations for computer graphics

https://imath.readthedocs.io/en/latest/

"},{"location":"available_software/detail/Imath/#available-modules","title":"Available modules","text":"

The overview below shows which Imath installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Imath, load one of these modules using a module load command like:

module load Imath/3.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Imath/3.1.9-GCCcore-13.2.0 x x x x x x x x x Imath/3.1.7-GCCcore-12.3.0 x x x x x x x x x Imath/3.1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/JasPer/","title":"JasPer","text":"

The JasPer Project is an open-source initiative to provide a free software-based reference implementation of the codec specified in the JPEG-2000 Part-1 standard.

https://www.ece.uvic.ca/~frodo/jasper/

"},{"location":"available_software/detail/JasPer/#available-modules","title":"Available modules","text":"

The overview below shows which JasPer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JasPer, load one of these modules using a module load command like:

module load JasPer/4.0.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JasPer/4.0.0-GCCcore-13.2.0 x x x x x x x x x JasPer/4.0.0-GCCcore-12.3.0 x x x x x x x x x JasPer/4.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Java/","title":"Java","text":""},{"location":"available_software/detail/Java/#available-modules","title":"Available modules","text":"

The overview below shows which Java installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Java, load one of these modules using a module load command like:

module load Java/17.0.6\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Java/17.0.6 x x x x x x x x x Java/17(@Java/17.0.6) x x x x x x x x x Java/11.0.20 x x x x x x x x x Java/11(@Java/11.0.20) x x x x x x x x x"},{"location":"available_software/detail/JsonCpp/","title":"JsonCpp","text":"

JsonCpp is a C++ library that allows manipulating JSON values, including serialization and deserialization to and from strings. It can also preserve existing comments in deserialization/serialization steps, making it a convenient format to store user input files.

https://open-source-parsers.github.io/jsoncpp-docs/doxygen/index.html

"},{"location":"available_software/detail/JsonCpp/#available-modules","title":"Available modules","text":"

The overview below shows which JsonCpp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JsonCpp, load one of these modules using a module load command like:

module load JsonCpp/1.9.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JsonCpp/1.9.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Judy/","title":"Judy","text":"

A C library that implements a dynamic array.

http://judy.sourceforge.net/

"},{"location":"available_software/detail/Judy/#available-modules","title":"Available modules","text":"

The overview below shows which Judy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Judy, load one of these modules using a module load command like:

module load Judy/1.0.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Judy/1.0.5-GCCcore-12.3.0 x x x x x x x x x Judy/1.0.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/JupyterLab/","title":"JupyterLab","text":"

JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook.

https://jupyter.org/

"},{"location":"available_software/detail/JupyterLab/#available-modules","title":"Available modules","text":"

The overview below shows which JupyterLab installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JupyterLab, load one of these modules using a module load command like:

module load JupyterLab/4.0.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterLab/4.0.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/JupyterLab/#jupyterlab405-gcccore-1230","title":"JupyterLab/4.0.5-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

async-lru-2.0.4, json5-0.9.14, jupyter-lsp-2.2.0, jupyterlab-4.0.5, jupyterlab_server-2.24.0

"},{"location":"available_software/detail/JupyterNotebook/","title":"JupyterNotebook","text":"

The Jupyter Notebook is the original web application for creating and sharing computational documents. It offers a simple, streamlined, document-centric experience.

https://jupyter.org/

"},{"location":"available_software/detail/JupyterNotebook/#available-modules","title":"Available modules","text":"

The overview below shows which JupyterNotebook installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using JupyterNotebook, load one of these modules using a module load command like:

module load JupyterNotebook/7.0.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 JupyterNotebook/7.0.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/KaHIP/","title":"KaHIP","text":"

The graph partitioning framework KaHIP -- Karlsruhe High Quality Partitioning.

https://kahip.github.io/

"},{"location":"available_software/detail/KaHIP/#available-modules","title":"Available modules","text":"

The overview below shows which KaHIP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using KaHIP, load one of these modules using a module load command like:

module load KaHIP/3.16-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 KaHIP/3.16-gompi-2023a x x x x x x x x x KaHIP/3.14-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/KronaTools/","title":"KronaTools","text":"

Krona Tools is a set of scripts to create Krona charts from several Bioinformatics tools as well as from text and XML files.

https://github.com/marbl/Krona/wiki/KronaTools

"},{"location":"available_software/detail/KronaTools/#available-modules","title":"Available modules","text":"

The overview below shows which KronaTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using KronaTools, load one of these modules using a module load command like:

module load KronaTools/2.8.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 KronaTools/2.8.1-GCCcore-12.3.0 x x x x x x x x x KronaTools/2.8.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LAME/","title":"LAME","text":"

LAME is a high quality MPEG Audio Layer III (MP3) encoder licensed under the LGPL.

http://lame.sourceforge.net/

"},{"location":"available_software/detail/LAME/#available-modules","title":"Available modules","text":"

The overview below shows which LAME installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LAME, load one of these modules using a module load command like:

module load LAME/3.100-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAME/3.100-GCCcore-13.2.0 x x x x x x x x x LAME/3.100-GCCcore-12.3.0 x x x x x x x x x LAME/3.100-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LAMMPS/","title":"LAMMPS","text":"

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for solid-state materials (metals, semiconductors) and soft matter (biomolecules, polymers) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

https://www.lammps.org

"},{"location":"available_software/detail/LAMMPS/#available-modules","title":"Available modules","text":"

The overview below shows which LAMMPS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LAMMPS, load one of these modules using a module load command like:

module load LAMMPS/29Aug2024-foss-2023b-kokkos\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LAMMPS/29Aug2024-foss-2023b-kokkos x x x x x x x x x LAMMPS/2Aug2023_update2-foss-2023a-kokkos x x x x x x x x x"},{"location":"available_software/detail/LERC/","title":"LERC","text":"

LERC is an open-source image or raster format which supports rapid encoding and decoding for any pixel type (not just RGB or Byte). Users set the maximum compression error per pixel while encoding, so the precision of the original input image is preserved (within user defined error bounds).

https://github.com/Esri/lerc

"},{"location":"available_software/detail/LERC/#available-modules","title":"Available modules","text":"

The overview below shows which LERC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LERC, load one of these modules using a module load command like:

module load LERC/4.0.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LERC/4.0.0-GCCcore-13.2.0 x x x x x x x x x LERC/4.0.0-GCCcore-12.3.0 x x x x x x x x x LERC/4.0.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LHAPDF/","title":"LHAPDF","text":"

Les Houches Parton Density Function. LHAPDF is the standard tool for evaluating parton distribution functions (PDFs) in high-energy physics.

http://lhapdf.hepforge.org/

"},{"location":"available_software/detail/LHAPDF/#available-modules","title":"Available modules","text":"

The overview below shows which LHAPDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LHAPDF, load one of these modules using a module load command like:

module load LHAPDF/6.5.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LHAPDF/6.5.4-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LLVM/","title":"LLVM","text":"

The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!) These libraries are built around a well specified code representation known as the LLVM intermediate representation (\"LLVM IR\"). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator.

https://llvm.org/

"},{"location":"available_software/detail/LLVM/#available-modules","title":"Available modules","text":"

The overview below shows which LLVM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LLVM, load one of these modules using a module load command like:

module load LLVM/16.0.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LLVM/16.0.6-GCCcore-13.2.0 x x x x x x x x x LLVM/16.0.6-GCCcore-12.3.0 x x x x x x x x x LLVM/15.0.5-GCCcore-12.2.0 x x x x x x - x x LLVM/14.0.6-GCCcore-12.3.0-llvmlite x x x x x x x x x"},{"location":"available_software/detail/LMDB/","title":"LMDB","text":"

LMDB is a fast, memory-efficient database. With memory-mapped files, it has the read performance of a pure in-memory database while retaining the persistence of standard disk-based databases.

https://symas.com/lmdb

"},{"location":"available_software/detail/LMDB/#available-modules","title":"Available modules","text":"

The overview below shows which LMDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LMDB, load one of these modules using a module load command like:

module load LMDB/0.9.31-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LMDB/0.9.31-GCCcore-12.3.0 x x x x x x x x x LMDB/0.9.29-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LRBinner/","title":"LRBinner","text":"

LRBinner is a long-read binning tool published in WABI 2021 proceedings and AMB.

https://github.com/anuradhawick/LRBinner

"},{"location":"available_software/detail/LRBinner/#available-modules","title":"Available modules","text":"

The overview below shows which LRBinner installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LRBinner, load one of these modules using a module load command like:

module load LRBinner/0.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LRBinner/0.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/LRBinner/#lrbinner01-foss-2023a","title":"LRBinner/0.1-foss-2023a","text":"

This is a list of extensions included in the module:

LRBinner-0.1, tabulate-0.9.0

"},{"location":"available_software/detail/LSD2/","title":"LSD2","text":"

Least-squares methods to estimate rates and dates from phylogenies

https://github.com/tothuhien/lsd2

"},{"location":"available_software/detail/LSD2/#available-modules","title":"Available modules","text":"

The overview below shows which LSD2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LSD2, load one of these modules using a module load command like:

module load LSD2/2.4.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LSD2/2.4.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LZO/","title":"LZO","text":"

Portable lossless data compression library

https://www.oberhumer.com/opensource/lzo/

"},{"location":"available_software/detail/LZO/#available-modules","title":"Available modules","text":"

The overview below shows which LZO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LZO, load one of these modules using a module load command like:

module load LZO/2.10-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LZO/2.10-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/LibTIFF/","title":"LibTIFF","text":"

tiff: Library and tools for reading and writing TIFF data files

https://libtiff.gitlab.io/libtiff/

"},{"location":"available_software/detail/LibTIFF/#available-modules","title":"Available modules","text":"

The overview below shows which LibTIFF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LibTIFF, load one of these modules using a module load command like:

module load LibTIFF/4.6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LibTIFF/4.6.0-GCCcore-13.2.0 x x x x x x x x x LibTIFF/4.5.0-GCCcore-12.3.0 x x x x x x x x x LibTIFF/4.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Libint/","title":"Libint","text":"

Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory.

https://github.com/evaleev/libint

"},{"location":"available_software/detail/Libint/#available-modules","title":"Available modules","text":"

The overview below shows which Libint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Libint, load one of these modules using a module load command like:

module load Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Libint/2.7.2-GCC-12.3.0-lmax-6-cp2k x x x x x x x x x"},{"location":"available_software/detail/LightGBM/","title":"LightGBM","text":"

A fast, distributed, high-performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.

https://lightgbm.readthedocs.io

"},{"location":"available_software/detail/LightGBM/#available-modules","title":"Available modules","text":"

The overview below shows which LightGBM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LightGBM, load one of these modules using a module load command like:

module load LightGBM/4.5.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LightGBM/4.5.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/LightGBM/#lightgbm450-foss-2023a","title":"LightGBM/4.5.0-foss-2023a","text":"

This is a list of extensions included in the module:

lightgbm-4.5.0

"},{"location":"available_software/detail/LittleCMS/","title":"LittleCMS","text":"

Little CMS intends to be an OPEN SOURCE small-footprint color management engine, with special focus on accuracy and performance.

https://www.littlecms.com/

"},{"location":"available_software/detail/LittleCMS/#available-modules","title":"Available modules","text":"

The overview below shows which LittleCMS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LittleCMS, load one of these modules using a module load command like:

module load LittleCMS/2.15-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LittleCMS/2.15-GCCcore-13.2.0 x x x x x x x x x LittleCMS/2.15-GCCcore-12.3.0 x x x x x x x x x LittleCMS/2.14-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/LoopTools/","title":"LoopTools","text":"

LoopTools is a package for evaluation of scalar and tensor one-loop integrals. It is based on the FF package by G.J. van Oldenborgh.

https://feynarts.de/looptools/

"},{"location":"available_software/detail/LoopTools/#available-modules","title":"Available modules","text":"

The overview below shows which LoopTools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using LoopTools, load one of these modules using a module load command like:

module load LoopTools/2.15-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 LoopTools/2.15-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Lua/","title":"Lua","text":"

Lua is a powerful, fast, lightweight, embeddable scripting language. Lua combines simple procedural syntax with powerful data description constructs based on associative arrays and extensible semantics. Lua is dynamically typed, runs by interpreting bytecode for a register-based virtual machine, and has automatic memory management with incremental garbage collection, making it ideal for configuration, scripting, and rapid prototyping.

https://www.lua.org/

"},{"location":"available_software/detail/Lua/#available-modules","title":"Available modules","text":"

The overview below shows which Lua installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Lua, load one of these modules using a module load command like:

module load Lua/5.4.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Lua/5.4.6-GCCcore-13.2.0 x x x x x x x x x Lua/5.4.6-GCCcore-12.3.0 x x x x x x x x x Lua/5.4.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MAFFT/","title":"MAFFT","text":"

MAFFT is a multiple sequence alignment program for unix-like operating systems. It offers a range of multiple alignment methods, L-INS-i (accurate; for alignment of <\u223c200 sequences), FFT-NS-2 (fast; for alignment of <\u223c30,000 sequences), etc.

https://mafft.cbrc.jp/alignment/software/source.html

"},{"location":"available_software/detail/MAFFT/#available-modules","title":"Available modules","text":"

The overview below shows which MAFFT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MAFFT, load one of these modules using a module load command like:

module load MAFFT/7.520-GCC-12.3.0-with-extensions\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MAFFT/7.520-GCC-12.3.0-with-extensions x x x x x x x x x MAFFT/7.505-GCC-12.2.0-with-extensions x x x x x x - x x"},{"location":"available_software/detail/MBX/","title":"MBX","text":"

MBX is an energy and force calculator for data-driven many-body simulations

https://github.com/paesanilab/MBX

"},{"location":"available_software/detail/MBX/#available-modules","title":"Available modules","text":"

The overview below shows which MBX installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MBX, load one of these modules using a module load command like:

module load MBX/1.1.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MBX/1.1.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MCL/","title":"MCL","text":"

The MCL algorithm is short for the Markov Cluster Algorithm, a fast and scalable unsupervised cluster algorithm for graphs (also known as networks) based on simulation of (stochastic) flow in graphs.

https://micans.org/mcl/

"},{"location":"available_software/detail/MCL/#available-modules","title":"Available modules","text":"

The overview below shows which MCL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MCL, load one of these modules using a module load command like:

module load MCL/22.282-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MCL/22.282-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/MDAnalysis/","title":"MDAnalysis","text":"

MDAnalysis is an object-oriented Python library to analyze trajectories from molecular dynamics (MD) simulations in many popular formats.

https://www.mdanalysis.org/

"},{"location":"available_software/detail/MDAnalysis/#available-modules","title":"Available modules","text":"

The overview below shows which MDAnalysis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MDAnalysis, load one of these modules using a module load command like:

module load MDAnalysis/2.4.2-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDAnalysis/2.4.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/MDAnalysis/#mdanalysis242-foss-2022b","title":"MDAnalysis/2.4.2-foss-2022b","text":"

This is a list of extensions included in the module:

fasteners-0.18, funcsigs-1.0.2, GridDataFormats-1.0.1, gsd-2.8.0, MDAnalysis-2.4.2, mmtf-python-1.1.3, mrcfile-1.4.3, msgpack-1.0.5

"},{"location":"available_software/detail/MDI/","title":"MDI","text":"

The MolSSI Driver Interface (MDI) project provides a standardized API for fast, on-the-fly communication between computational chemistry codes. This greatly simplifies the process of implementing methods that require the cooperation of multiple software packages and enables developers to write a single implementation that works across many different codes. The API is sufficiently general to support a wide variety of techniques, including QM/MM, ab initio MD, machine learning, advanced sampling, and path integral MD, while also being straightforwardly extensible. Communication between codes is handled by the MDI Library, which enables tight coupling between codes using either the MPI or TCP/IP methods.

https://github.com/MolSSI-MDI/MDI_Library

"},{"location":"available_software/detail/MDI/#available-modules","title":"Available modules","text":"

The overview below shows which MDI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MDI, load one of these modules using a module load command like:

module load MDI/1.4.29-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MDI/1.4.29-gompi-2023b x x x x x x x x x MDI/1.4.26-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/METIS/","title":"METIS","text":"

METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.

http://glaros.dtc.umn.edu/gkhome/metis/metis/overview

"},{"location":"available_software/detail/METIS/#available-modules","title":"Available modules","text":"

The overview below shows which METIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using METIS, load one of these modules using a module load command like:

module load METIS/5.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 METIS/5.1.0-GCCcore-12.3.0 x x x x x x x x x METIS/5.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MMseqs2/","title":"MMseqs2","text":"

MMseqs2: ultra fast and sensitive search and clustering suite

https://mmseqs.com

"},{"location":"available_software/detail/MMseqs2/#available-modules","title":"Available modules","text":"

The overview below shows which MMseqs2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MMseqs2, load one of these modules using a module load command like:

module load MMseqs2/14-7e284-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MMseqs2/14-7e284-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/MODFLOW/","title":"MODFLOW","text":"

MODFLOW is the USGS's modular hydrologic model. MODFLOW is considered an international standard for simulating and predicting groundwater conditions and groundwater/surface-water interactions.

https://www.usgs.gov/mission-areas/water-resources/science/modflow-and-related-programs

"},{"location":"available_software/detail/MODFLOW/#available-modules","title":"Available modules","text":"

The overview below shows which MODFLOW installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MODFLOW, load one of these modules using a module load command like:

module load MODFLOW/6.4.4-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MODFLOW/6.4.4-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MPC/","title":"MPC","text":"

Gnu Mpc is a C library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result. It extends the principles of the IEEE-754 standard for fixed precision real floating point numbers to complex numbers, providing well-defined semantics for every operation. At the same time, speed of operation at high precision is a major design goal.

http://www.multiprecision.org/

"},{"location":"available_software/detail/MPC/#available-modules","title":"Available modules","text":"

The overview below shows which MPC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MPC, load one of these modules using a module load command like:

module load MPC/1.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPC/1.3.1-GCCcore-13.2.0 x x x x x x x x x MPC/1.3.1-GCCcore-12.3.0 x x x x x x x x x MPC/1.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MPFR/","title":"MPFR","text":"

The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.

https://www.mpfr.org

"},{"location":"available_software/detail/MPFR/#available-modules","title":"Available modules","text":"

The overview below shows which MPFR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MPFR, load one of these modules using a module load command like:

module load MPFR/4.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MPFR/4.2.1-GCCcore-13.2.0 x x x x x x x x x MPFR/4.2.0-GCCcore-12.3.0 x x x x x x x x x MPFR/4.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MUMPS/","title":"MUMPS","text":"

A parallel sparse direct solver

https://graal.ens-lyon.fr/MUMPS/

"},{"location":"available_software/detail/MUMPS/#available-modules","title":"Available modules","text":"

The overview below shows which MUMPS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MUMPS, load one of these modules using a module load command like:

module load MUMPS/5.6.1-foss-2023a-metis\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MUMPS/5.6.1-foss-2023a-metis x x x x x x x x x MUMPS/5.6.1-foss-2022b-metis x x x x x x - x x"},{"location":"available_software/detail/Mako/","title":"Mako","text":"

A super-fast templating language that borrows the best ideas from the existing templating languages

https://www.makotemplates.org

"},{"location":"available_software/detail/Mako/#available-modules","title":"Available modules","text":"

The overview below shows which Mako installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mako, load one of these modules using a module load command like:

module load Mako/1.2.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mako/1.2.4-GCCcore-13.2.0 x x x x x x x x x Mako/1.2.4-GCCcore-12.3.0 x x x x x x x x x Mako/1.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Mako/#mako124-gcccore-1320","title":"Mako/1.2.4-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

Mako-1.2.4, MarkupSafe-2.1.3

"},{"location":"available_software/detail/Mako/#mako124-gcccore-1230","title":"Mako/1.2.4-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Mako-1.2.4, MarkupSafe-2.1.3

"},{"location":"available_software/detail/MariaDB/","title":"MariaDB","text":"

MariaDB is an enhanced, drop-in replacement for MySQL. Included engines: myISAM, Aria, InnoDB, RocksDB, TokuDB, OQGraph, Mroonga.

https://mariadb.org/

"},{"location":"available_software/detail/MariaDB/#available-modules","title":"Available modules","text":"

The overview below shows which MariaDB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MariaDB, load one of these modules using a module load command like:

module load MariaDB/11.6.0-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MariaDB/11.6.0-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Mash/","title":"Mash","text":"

Fast genome and metagenome distance estimation using MinHash

http://mash.readthedocs.org

"},{"location":"available_software/detail/Mash/#available-modules","title":"Available modules","text":"

The overview below shows which Mash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mash, load one of these modules using a module load command like:

module load Mash/2.3-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mash/2.3-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Mesa/","title":"Mesa","text":"

Mesa is an open-source implementation of the OpenGL specification - a system for rendering interactive 3D graphics.

https://www.mesa3d.org/

"},{"location":"available_software/detail/Mesa/#available-modules","title":"Available modules","text":"

The overview below shows which Mesa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mesa, load one of these modules using a module load command like:

module load Mesa/23.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mesa/23.1.9-GCCcore-13.2.0 x x x x x x x x x Mesa/23.1.4-GCCcore-12.3.0 x x x x x x x x x Mesa/22.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Meson/","title":"Meson","text":"

Meson is a cross-platform build system designed to be both as fast and as user friendly as possible.

https://mesonbuild.com

"},{"location":"available_software/detail/Meson/#available-modules","title":"Available modules","text":"

The overview below shows which Meson installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Meson, load one of these modules using a module load command like:

module load Meson/1.3.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Meson/1.3.1-GCCcore-12.3.0 x x x x x x x x x Meson/1.2.3-GCCcore-13.2.0 x x x x x x x x x Meson/1.1.1-GCCcore-12.3.0 x x x x x x x x x Meson/0.64.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MetaEuk/","title":"MetaEuk","text":"

MetaEuk is a modular toolkit designed for large-scale gene discovery and annotation in eukaryotic metagenomic contigs.

https://metaeuk.soedinglab.org

"},{"location":"available_software/detail/MetaEuk/#available-modules","title":"Available modules","text":"

The overview below shows which MetaEuk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MetaEuk, load one of these modules using a module load command like:

module load MetaEuk/6-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MetaEuk/6-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/MetalWalls/","title":"MetalWalls","text":"

MetalWalls (MW) is a molecular dynamics code dedicated to the modelling of electrochemical systems. Its main originality is the inclusion of a series of methods that allow applying a constant potential within the electrode materials.

https://gitlab.com/ampere2/metalwalls

"},{"location":"available_software/detail/MetalWalls/#available-modules","title":"Available modules","text":"

The overview below shows which MetalWalls installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MetalWalls, load one of these modules using a module load command like:

module load MetalWalls/21.06.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MetalWalls/21.06.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/MultiQC/","title":"MultiQC","text":"

Aggregate results from bioinformatics analyses across many samples into a single report. MultiQC searches a given directory for analysis logs and compiles an HTML report. It's a general use tool, perfect for summarising the output from numerous bioinformatics tools.

https://multiqc.info

"},{"location":"available_software/detail/MultiQC/#available-modules","title":"Available modules","text":"

The overview below shows which MultiQC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using MultiQC, load one of these modules using a module load command like:

module load MultiQC/1.14-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 MultiQC/1.14-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/MultiQC/#multiqc114-foss-2022b","title":"MultiQC/1.14-foss-2022b","text":"

This is a list of extensions included in the module:

coloredlogs-15.0.1, colormath-3.0.0, commonmark-0.9.1, humanfriendly-10.0, lzstring-1.0.4, Markdown-3.4.1, markdown-it-py-2.1.0, mdurl-0.1.2, multiqc-1.14, Pygments-2.14.0, rich-13.3.1, rich-click-1.6.1, spectra-0.0.11

"},{"location":"available_software/detail/Mustache/","title":"Mustache","text":"

Mustache (Multi-scale Detection of Chromatin Loops from Hi-C and Micro-C Maps using Scale-Space Representation) is a tool for multi-scale detection of chromatin loops from Hi-C and Micro-C contact maps in high resolutions (10kbp all the way to 500bp and even more). Mustache uses recent technical advances in scale-space theory in Computer Vision to detect chromatin loops caused by interaction of DNA segments with a variable size.

https://github.com/ay-lab/mustache

"},{"location":"available_software/detail/Mustache/#available-modules","title":"Available modules","text":"

The overview below shows which Mustache installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Mustache, load one of these modules using a module load command like:

module load Mustache/1.3.3-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Mustache/1.3.3-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/NASM/","title":"NASM","text":"

NASM: General-purpose x86 assembler

https://www.nasm.us/

"},{"location":"available_software/detail/NASM/#available-modules","title":"Available modules","text":"

The overview below shows which NASM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NASM, load one of these modules using a module load command like:

module load NASM/2.16.01-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NASM/2.16.01-GCCcore-13.2.0 x x x x x x x x x NASM/2.16.01-GCCcore-12.3.0 x x x x x x x x x NASM/2.15.05-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NCCL/","title":"NCCL","text":"

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance optimized for NVIDIA GPUs.

https://developer.nvidia.com/nccl

"},{"location":"available_software/detail/NCCL/#available-modules","title":"Available modules","text":"

The overview below shows which NCCL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NCCL, load one of these modules using a module load command like:

module load NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NCCL/2.18.3-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/NLTK/","title":"NLTK","text":"

NLTK is a leading platform for building Python programs to work with human language data.

https://www.nltk.org/

"},{"location":"available_software/detail/NLTK/#available-modules","title":"Available modules","text":"

The overview below shows which NLTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NLTK, load one of these modules using a module load command like:

module load NLTK/3.8.1-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NLTK/3.8.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/NLTK/#nltk381-foss-2023b","title":"NLTK/3.8.1-foss-2023b","text":"

This is a list of extensions included in the module:

NLTK-3.8.1, python-crfsuite-0.9.10, regex-2023.12.25

"},{"location":"available_software/detail/NLopt/","title":"NLopt","text":"

NLopt is a free/open-source library for nonlinear optimization, providing a common interface for a number of different free optimization routines available online as well as original implementations of various other algorithms.

http://ab-initio.mit.edu/wiki/index.php/NLopt

"},{"location":"available_software/detail/NLopt/#available-modules","title":"Available modules","text":"

The overview below shows which NLopt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NLopt, load one of these modules using a module load command like:

module load NLopt/2.7.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NLopt/2.7.1-GCCcore-13.2.0 x x x x x x x x x NLopt/2.7.1-GCCcore-12.3.0 x x x x x x x x x NLopt/2.7.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NSPR/","title":"NSPR","text":"

Netscape Portable Runtime (NSPR) provides a platform-neutral API for system level and libc-like functions.

https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSPR

"},{"location":"available_software/detail/NSPR/#available-modules","title":"Available modules","text":"

The overview below shows which NSPR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NSPR, load one of these modules using a module load command like:

module load NSPR/4.35-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSPR/4.35-GCCcore-13.2.0 x x x x x x x x x NSPR/4.35-GCCcore-12.3.0 x x x x x x x x x NSPR/4.35-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/NSS/","title":"NSS","text":"

Network Security Services (NSS) is a set of libraries designed to support cross-platform development of security-enabled client and server applications.

https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS

"},{"location":"available_software/detail/NSS/#available-modules","title":"Available modules","text":"

The overview below shows which NSS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using NSS, load one of these modules using a module load command like:

module load NSS/3.94-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 NSS/3.94-GCCcore-13.2.0 x x x x x x x x x NSS/3.89.1-GCCcore-12.3.0 x x x x x x x x x NSS/3.85-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Nextflow/","title":"Nextflow","text":"

Nextflow is a reactive workflow framework and a programming DSL that eases writing computational pipelines with complex data

https://www.nextflow.io/

"},{"location":"available_software/detail/Nextflow/#available-modules","title":"Available modules","text":"

The overview below shows which Nextflow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Nextflow, load one of these modules using a module load command like:

module load Nextflow/23.10.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Nextflow/23.10.0 x x x x x x x x x"},{"location":"available_software/detail/Ninja/","title":"Ninja","text":"

Ninja is a small build system with a focus on speed.

https://ninja-build.org/

"},{"location":"available_software/detail/Ninja/#available-modules","title":"Available modules","text":"

The overview below shows which Ninja installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ninja, load one of these modules using a module load command like:

module load Ninja/1.11.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ninja/1.11.1-GCCcore-13.2.0 x x x x x x x x x Ninja/1.11.1-GCCcore-12.3.0 x x x x x x x x x Ninja/1.11.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OPARI2/","title":"OPARI2","text":"

OPARI2, the successor of Forschungszentrum Juelich's OPARI, is a source-to-source instrumentation tool for OpenMP and hybrid codes. It surrounds OpenMP directives and runtime library calls with calls to the POMP2 measurement interface.

https://www.score-p.org

"},{"location":"available_software/detail/OPARI2/#available-modules","title":"Available modules","text":"

The overview below shows which OPARI2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OPARI2, load one of these modules using a module load command like:

module load OPARI2/2.0.8-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OPARI2/2.0.8-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/OSU-Micro-Benchmarks/","title":"OSU-Micro-Benchmarks","text":"

OSU Micro-Benchmarks

https://mvapich.cse.ohio-state.edu/benchmarks/

"},{"location":"available_software/detail/OSU-Micro-Benchmarks/#available-modules","title":"Available modules","text":"

The overview below shows which OSU-Micro-Benchmarks installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OSU-Micro-Benchmarks, load one of these modules using a module load command like:

module load OSU-Micro-Benchmarks/7.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OSU-Micro-Benchmarks/7.2-gompi-2023b x x x x x x x x x OSU-Micro-Benchmarks/7.2-gompi-2023a-CUDA-12.1.1 x x x x x x - x x OSU-Micro-Benchmarks/7.1-1-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/OTF2/","title":"OTF2","text":"

The Open Trace Format 2 is a highly scalable, memory efficient event trace data format plus support library. It is the new standard trace format for Scalasca, Vampir, and TAU and is open for other tools.

https://www.score-p.org

"},{"location":"available_software/detail/OTF2/#available-modules","title":"Available modules","text":"

The overview below shows which OTF2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OTF2, load one of these modules using a module load command like:

module load OTF2/3.0.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OTF2/3.0.3-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/OpenBLAS/","title":"OpenBLAS","text":"

OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.

http://www.openblas.net/

"},{"location":"available_software/detail/OpenBLAS/#available-modules","title":"Available modules","text":"

The overview below shows which OpenBLAS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenBLAS, load one of these modules using a module load command like:

module load OpenBLAS/0.3.24-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenBLAS/0.3.24-GCC-13.2.0 x x x x x x x x x OpenBLAS/0.3.23-GCC-12.3.0 x x x x x x x x x OpenBLAS/0.3.21-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenEXR/","title":"OpenEXR","text":"

OpenEXR is a high dynamic-range (HDR) image file format developed by Industrial Light & Magic for use in computer imaging applications

https://www.openexr.com/

"},{"location":"available_software/detail/OpenEXR/#available-modules","title":"Available modules","text":"

The overview below shows which OpenEXR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenEXR, load one of these modules using a module load command like:

module load OpenEXR/3.2.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenEXR/3.2.0-GCCcore-13.2.0 x x x x x x x x x OpenEXR/3.1.7-GCCcore-12.3.0 x x x x x x x x x OpenEXR/3.1.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenFOAM/","title":"OpenFOAM","text":"

OpenFOAM is a free, open source CFD software package. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics.

https://www.openfoam.org/

"},{"location":"available_software/detail/OpenFOAM/#available-modules","title":"Available modules","text":"

The overview below shows which OpenFOAM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenFOAM, load one of these modules using a module load command like:

module load OpenFOAM/v2406-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenFOAM/v2406-foss-2023a x x x x x x x x x OpenFOAM/v2312-foss-2023a x x x x x x x x x OpenFOAM/11-foss-2023a x x x x x x x x x OpenFOAM/10-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/OpenJPEG/","title":"OpenJPEG","text":"

OpenJPEG is an open-source JPEG 2000 codec written in the C language. It has been developed in order to promote the use of JPEG 2000, a still-image compression standard from the Joint Photographic Experts Group (JPEG). Since May 2015, it is officially recognized by ISO/IEC and ITU-T as a JPEG 2000 Reference Software.

https://www.openjpeg.org/

"},{"location":"available_software/detail/OpenJPEG/#available-modules","title":"Available modules","text":"

The overview below shows which OpenJPEG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenJPEG, load one of these modules using a module load command like:

module load OpenJPEG/2.5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenJPEG/2.5.0-GCCcore-13.2.0 x x x x x x x x x OpenJPEG/2.5.0-GCCcore-12.3.0 x x x x x x x x x OpenJPEG/2.5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenMPI/","title":"OpenMPI","text":"

The Open MPI Project is an open source MPI-3 implementation.

https://www.open-mpi.org/

"},{"location":"available_software/detail/OpenMPI/#available-modules","title":"Available modules","text":"

The overview below shows which OpenMPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenMPI, load one of these modules using a module load command like:

module load OpenMPI/4.1.6-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenMPI/4.1.6-GCC-13.2.0 x x x x x x x x x OpenMPI/4.1.5-GCC-12.3.0 x x x x x x x x x OpenMPI/4.1.4-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/OpenPGM/","title":"OpenPGM","text":"

OpenPGM is an open source implementation of the Pragmatic General Multicast (PGM) specification in RFC 3208 available at www.ietf.org. PGM is a reliable and scalable multicast protocol that enables receivers to detect loss, request retransmission of lost data, or notify an application of unrecoverable loss. PGM is a receiver-reliable protocol, which means the receiver is responsible for ensuring all data is received, absolving the sender of reception responsibility.

https://code.google.com/p/openpgm/

"},{"location":"available_software/detail/OpenPGM/#available-modules","title":"Available modules","text":"

The overview below shows which OpenPGM installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenPGM, load one of these modules using a module load command like:

module load OpenPGM/5.2.122-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenPGM/5.2.122-GCCcore-13.2.0 x x x x x x x x x OpenPGM/5.2.122-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/OpenSSL/","title":"OpenSSL","text":"

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library.

https://www.openssl.org/

"},{"location":"available_software/detail/OpenSSL/#available-modules","title":"Available modules","text":"

The overview below shows which OpenSSL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OpenSSL, load one of these modules using a module load command like:

module load OpenSSL/1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OpenSSL/1.1 x x x x x x x x x"},{"location":"available_software/detail/OrthoFinder/","title":"OrthoFinder","text":"

OrthoFinder is a fast, accurate and comprehensive platform for comparative genomics

https://github.com/davidemms/OrthoFinder

"},{"location":"available_software/detail/OrthoFinder/#available-modules","title":"Available modules","text":"

The overview below shows which OrthoFinder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using OrthoFinder, load one of these modules using a module load command like:

module load OrthoFinder/2.5.5-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 OrthoFinder/2.5.5-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Osi/","title":"Osi","text":"

Osi (Open Solver Interface) provides an abstract base class to a generic linear programming (LP) solver, along with derived classes for specific solvers. Many applications may be able to use the Osi to insulate themselves from a specific LP solver. That is, programs written to the OSI standard may be linked to any solver with an OSI interface and should produce correct results. The OSI has been significantly extended compared to its first incarnation. Currently, the OSI supports linear programming solvers and has rudimentary support for integer programming.

https://github.com/coin-or/Osi

"},{"location":"available_software/detail/Osi/#available-modules","title":"Available modules","text":"

The overview below shows which Osi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Osi, load one of these modules using a module load command like:

module load Osi/0.108.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Osi/0.108.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PAPI/","title":"PAPI","text":"

PAPI provides the tool designer and application engineer with a consistent interface and methodology for use of the performance counter hardware found in most major microprocessors. PAPI enables software engineers to see, in near real time, the relation between software performance and processor events. In addition Component PAPI provides access to a collection of components that expose performance measurement opportunities across the hardware and software stack.

https://icl.cs.utk.edu/projects/papi/

"},{"location":"available_software/detail/PAPI/#available-modules","title":"Available modules","text":"

The overview below shows which PAPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PAPI, load one of these modules using a module load command like:

module load PAPI/7.1.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PAPI/7.1.0-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PCRE/","title":"PCRE","text":"

The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.

https://www.pcre.org/

"},{"location":"available_software/detail/PCRE/#available-modules","title":"Available modules","text":"

The overview below shows which PCRE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PCRE, load one of these modules using a module load command like:

module load PCRE/8.45-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE/8.45-GCCcore-13.2.0 x x x x x x x x x PCRE/8.45-GCCcore-12.3.0 x x x x x x x x x PCRE/8.45-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PCRE2/","title":"PCRE2","text":"

The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.

https://www.pcre.org/

"},{"location":"available_software/detail/PCRE2/#available-modules","title":"Available modules","text":"

The overview below shows which PCRE2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PCRE2, load one of these modules using a module load command like:

module load PCRE2/10.42-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PCRE2/10.42-GCCcore-13.2.0 x x x x x x x x x PCRE2/10.42-GCCcore-12.3.0 x x x x x x x x x PCRE2/10.40-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PDT/","title":"PDT","text":"

Program Database Toolkit (PDT) is a framework for analyzing source code written in several programming languages and for making rich program knowledge accessible to developers of static and dynamic analysis tools. PDT implements a standard program representation, the program database (PDB), that can be accessed in a uniform way through a class library supporting common PDB operations.

https://www.cs.uoregon.edu/research/pdt/

"},{"location":"available_software/detail/PDT/#available-modules","title":"Available modules","text":"

The overview below shows which PDT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PDT, load one of these modules using a module load command like:

module load PDT/3.25.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PDT/3.25.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PETSc/","title":"PETSc","text":"

PETSc, pronounced PET-see (the S is silent), is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.

https://www.mcs.anl.gov/petsc

"},{"location":"available_software/detail/PETSc/#available-modules","title":"Available modules","text":"

The overview below shows which PETSc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PETSc, load one of these modules using a module load command like:

module load PETSc/3.20.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PETSc/3.20.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PGPLOT/","title":"PGPLOT","text":"

The PGPLOT Graphics Subroutine Library is a Fortran- or C-callable, device-independent graphics package for making simple scientific graphs. It is intended for making graphical images of publication quality with minimum effort on the part of the user. For most applications, the program can be device-independent, and the output can be directed to the appropriate device at run time.

https://sites.astro.caltech.edu/~tjp/pgplot/

"},{"location":"available_software/detail/PGPLOT/#available-modules","title":"Available modules","text":"

The overview below shows which PGPLOT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PGPLOT, load one of these modules using a module load command like:

module load PGPLOT/5.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PGPLOT/5.2.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/PLUMED/","title":"PLUMED","text":"

PLUMED is an open source library for free energy calculations in molecular systems which works together with some of the most popular molecular dynamics engines. Free energy calculations can be performed as a function of many order parameters with a particular focus on biological problems, using state of the art methods such as metadynamics, umbrella sampling and Jarzynski-equation based steered MD. The software, written in C++, can be easily interfaced with both Fortran and C/C++ codes.

https://www.plumed.org

"},{"location":"available_software/detail/PLUMED/#available-modules","title":"Available modules","text":"

The overview below shows which PLUMED installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PLUMED, load one of these modules using a module load command like:

module load PLUMED/2.9.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLUMED/2.9.2-foss-2023b x x x x x x x x x PLUMED/2.9.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PLY/","title":"PLY","text":"

PLY is yet another implementation of lex and yacc for Python.

https://www.dabeaz.com/ply/

"},{"location":"available_software/detail/PLY/#available-modules","title":"Available modules","text":"

The overview below shows which PLY installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PLY, load one of these modules using a module load command like:

module load PLY/3.11-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PLY/3.11-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PMIx/","title":"PMIx","text":"

Process Management for Exascale Environments. PMI Exascale (PMIx) represents an attempt to provide an extended version of the PMI standard specifically designed to support clusters up to and including exascale sizes. The overall objective of the project is not to branch the existing pseudo-standard definitions - in fact, PMIx fully supports both of the existing PMI-1 and PMI-2 APIs - but rather to (a) augment and extend those APIs to eliminate some current restrictions that impact scalability, and (b) provide a reference implementation of the PMI-server that demonstrates the desired level of scalability.

https://pmix.org/

"},{"location":"available_software/detail/PMIx/#available-modules","title":"Available modules","text":"

The overview below shows which PMIx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PMIx, load one of these modules using a module load command like:

module load PMIx/4.2.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PMIx/4.2.6-GCCcore-13.2.0 x x x x x x x x x PMIx/4.2.4-GCCcore-12.3.0 x x x x x x x x x PMIx/4.2.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PROJ/","title":"PROJ","text":"

Program proj is a standard Unix filter function which converts geographic longitude and latitude coordinates into Cartesian coordinates.

https://proj.org

"},{"location":"available_software/detail/PROJ/#available-modules","title":"Available modules","text":"

The overview below shows which PROJ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PROJ, load one of these modules using a module load command like:

module load PROJ/9.3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PROJ/9.3.1-GCCcore-13.2.0 x x x x x x x x x PROJ/9.2.0-GCCcore-12.3.0 x x x x x x x x x PROJ/9.1.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Pango/","title":"Pango","text":"

Pango is a library for laying out and rendering of text, with an emphasis on internationalization. Pango can be used anywhere that text layout is needed, though most of the work on Pango so far has been done in the context of the GTK+ widget toolkit. Pango forms the core of text and font handling for GTK+-2.x.

https://www.pango.org/

"},{"location":"available_software/detail/Pango/#available-modules","title":"Available modules","text":"

The overview below shows which Pango installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pango, load one of these modules using a module load command like:

module load Pango/1.51.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pango/1.51.0-GCCcore-13.2.0 x x x x x x x x x Pango/1.50.14-GCCcore-12.3.0 x x x x x x x x x Pango/1.50.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ParMETIS/","title":"ParMETIS","text":"

ParMETIS is an MPI-based parallel library that implements a variety of algorithms for partitioning unstructured graphs and meshes, and for computing fill-reducing orderings of sparse matrices. ParMETIS extends the functionality provided by METIS and includes routines that are especially suited for parallel AMR computations and large scale numerical simulations. The algorithms implemented in ParMETIS are based on the parallel multilevel k-way graph-partitioning, adaptive repartitioning, and parallel multi-constrained partitioning schemes.

http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview

"},{"location":"available_software/detail/ParMETIS/#available-modules","title":"Available modules","text":"

The overview below shows which ParMETIS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ParMETIS, load one of these modules using a module load command like:

module load ParMETIS/4.0.3-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParMETIS/4.0.3-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/ParaView/","title":"ParaView","text":"

ParaView is a scientific parallel visualizer.

https://www.paraview.org

"},{"location":"available_software/detail/ParaView/#available-modules","title":"Available modules","text":"

The overview below shows which ParaView installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ParaView, load one of these modules using a module load command like:

module load ParaView/5.11.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ParaView/5.11.2-foss-2023a x x x x x x x x x ParaView/5.11.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/Paraver/","title":"Paraver","text":"

A very powerful performance visualization and analysis tool based on traces that can be used to analyse any information that is expressed in its input trace format. Traces for parallel MPI, OpenMP and other programs can be generated with Extrae.

https://tools.bsc.es/paraver

"},{"location":"available_software/detail/Paraver/#available-modules","title":"Available modules","text":"

The overview below shows which Paraver installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Paraver, load one of these modules using a module load command like:

module load Paraver/4.11.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Paraver/4.11.4-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Perl-bundle-CPAN/","title":"Perl-bundle-CPAN","text":"

A set of common packages from CPAN

https://www.perl.org/

"},{"location":"available_software/detail/Perl-bundle-CPAN/#available-modules","title":"Available modules","text":"

The overview below shows which Perl-bundle-CPAN installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Perl-bundle-CPAN, load one of these modules using a module load command like:

module load Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Perl-bundle-CPAN/#perl-bundle-cpan5361-gcccore-1230","title":"Perl-bundle-CPAN/5.36.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.335, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::COW-0.007, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.008, Business::ISBN::Data-20230516.001, Canary::Stability-2013, Capture::Tiny-0.48, Carp::Clan-6.08, Carp::Heavy-1.50, CGI-4.57, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Method::Modifiers-2.15, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Class::XSAccessor-1.19, Clone-0.46, Clone::Choose-0.010, common::sense-3.75, Compress::Raw::Zlib-2.204, Config::General-2.65, Config::INI-0.029, Config::MVP-2.200013, Config::MVP::Reader::INI-2.101465, Config::Simple-4.58, Config::Tiny-2.29, Const::Exporter-1.2.2, Const::Fast-0.014, CPAN::Meta::Check-0.017, CPAN::Uploader-0.103018, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.114, Data::Section-0.200008, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.59, DateTime::Locale-1.38, DateTime::TimeZone-2.60, DateTime::Tiny-1.07, DBD::CSV-0.60, DBD::SQLite-1.72, DBI-1.643, DBIx::Admin::CreateTable-2.11, DBIx::Admin::DSNManager-2.02, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::FindPerl-0.016, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.030, Email::Date::Format-1.008, Encode-3.19, 
Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter::Declare-0.114, Exporter::Tiny-1.006002, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.70, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.014, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long::Descriptive-0.111, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9726, Graph::ReadWrite-2.10, Hash::Merge-0.302, Hash::Objectify-0.008, Heap-0.80, Hook::LexWrap-0.26, HTML::Entities::Interpolate-1.10, HTML::Form-6.11, HTML::Parser-3.81, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::CookieJar-0.014, HTTP::Cookies-6.10, HTTP::Daemon-6.16, HTTP::Date-6.05, HTTP::Message-6.44, HTTP::Negotiate-6.01, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::Compress::Zip-2.204, IO::HTML-1.004, IO::Socket::SSL-2.083, IO::String-1.08, IO::Stringy-2.113, IO::TieCombine-1.005, IO::Tty-1.17, IO::Tty-1.17, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.10, JSON::MaybeXS-1.004005, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.59, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.71, Log::Dispatch::Array-1.005, Log::Dispatchouli-3.002, Log::Handler-0.90, Log::Log4perl-1.57, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.34, Log::Report::Optional-1.07, 
Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.70, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.884, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-v1.013.1, MIME::Lite-3.033, MIME::Types-2.24, Mixin::Linewise::Readers-0.111, Mock::Quick-1.111, Module::Build-0.4234, Module::Build::Tiny-0.045, Module::Build::XSUtil-0.19, Module::CoreList-5.20230423, Module::Implementation-0.09, Module::Install-1.21, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000038, Module::Path-0.19, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005005, Moose-2.2203, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.007, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.203, MooseX::Types-0.50, MooseX::Types::Perl-0.101344, Mouse-v2.5.10, Mozilla::CA-20221114, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.15, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.18, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.31, Params::ValidationCompiler-0.31, parent-0.241, Parse::RecDescent-1.967015, Parse::Yapp-1.21, Path::Tiny-0.144, PDF::API2-2.044, Perl::OSType-1.010, Perl::PrereqScanner-1.100, PerlIO::utf8_strict-0.010, Pod::Elemental-0.103006, Pod::Escapes-1.07, Pod::Eventual-0.094003, Pod::LaTeX-0.61, Pod::Man-5.01, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.45, Pod::Weaver-4.019, PPI-1.276, Readonly-2.05, Ref::Util-0.204, Regexp::Common-2017060201, Role::HasMessage-0.007, Role::Identifiable::HasIdent-0.009, Role::Tiny-2.002004, Scalar::Util-1.63, 
Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104003, Specio-0.48, Spiffy-0.46, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Errf-0.009, String::Flogger-1.101246, String::Formatter-1.235, String::Print-0.94, String::RewritePrefix-0.009, String::Truncate-1.100603, String::TtyLength-0.03, Sub::Exporter-0.989, Sub::Exporter::ForMethods-0.100055, Sub::Exporter::GlobExporter-0.006, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.929, Sub::Name-0.27, Sub::Quote-2.006008, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Linux::Device::CPU-0.7905, Sys::Info::Driver::Unknown-0.79, Sys::Info::Driver::Unknown::Device::CPU-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.45, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000155, Test::Base-0.89, Test::CheckDeps-0.010, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.204, Test::Differences-0.69, Test::Exception-0.43, Test::FailWarnings-0.008, Test::Fatal-0.017, Test::File-1.993, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More::UTF8-0.05, Test::Most-0.38, Test::Needs-0.002010, Test::NoWarnings-1.06, Test::Object-0.08, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302195, Test::SubCalls-1.10, Test::Sys::Info-0.23, Test::Version-2.09, Test::Warn-0.37, Test::Warnings-0.031, Test::Without::Module-0.21, Test::YAML-1.07, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, 
Text::CSV_XS-1.50, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::Soundex-3.05, Text::Table-1.135, Text::Table::Manifold-1.03, Text::Template-1.61, Throwable-1.001, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.35, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Type::Tiny-2.004000, Types::Serialiser-1.01, Types::Serialiser-1.01, Unicode::EastAsianWidth-12.0, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, Unix::Processors-2.046, URI-5.19, Variable::Magic-0.63, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::Writer-0.900, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.74

"},{"location":"available_software/detail/Perl/","title":"Perl","text":"

Larry Wall's Practical Extraction and Report Language. Includes a small selection of extra CPAN packages for core functionality.

https://www.perl.org/

"},{"location":"available_software/detail/Perl/#available-modules","title":"Available modules","text":"

The overview below shows which Perl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Perl, load one of these modules using a module load command like:

module load Perl/5.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Perl/5.38.0-GCCcore-13.2.0 x x x x x x x x x Perl/5.36.1-GCCcore-12.3.0 x x x x x x x x x Perl/5.36.0-GCCcore-12.2.0-minimal x x x x x x - x x Perl/5.36.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Perl/#perl5380-gcccore-1320","title":"Perl/5.38.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21

"},{"location":"available_software/detail/Perl/#perl5361-gcccore-1230","title":"Perl/5.36.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Carp-1.50, constant-1.33, Data::Dumper-2.183, Exporter-5.77, File::Path-2.18, File::Spec-3.75, Getopt::Long-2.54, IO::File-1.51, Text::ParseWords-3.31, Thread::Queue-3.13, threads-2.21

"},{"location":"available_software/detail/Perl/#perl5360-gcccore-1220","title":"Perl/5.36.0-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Algorithm::Dependency-1.112, Algorithm::Diff-1.201, aliased-0.34, AnyEvent-7.17, App::Cmd-0.334, App::cpanminus-1.7046, AppConfig-1.71, Archive::Extract-0.88, Array::Transpose-0.06, Array::Utils-0.5, Authen::NTLM-1.09, Authen::SASL-2.16, AutoLoader-5.74, B::Hooks::EndOfScope-0.26, B::Lint-1.20, boolean-0.46, Business::ISBN-3.007, Business::ISBN::Data-20210112.006, Canary::Stability-2013, Capture::Tiny-0.48, Carp-1.50, Carp::Clan-6.08, Carp::Heavy-1.50, Class::Accessor-0.51, Class::Data::Inheritable-0.09, Class::DBI-v3.0.17, Class::DBI::SQLite-0.11, Class::Inspector-1.36, Class::ISA-0.36, Class::Load-0.25, Class::Load::XS-0.10, Class::Singleton-1.6, Class::Tiny-1.008, Class::Trigger-0.15, Clone-0.45, Clone::Choose-0.010, common::sense-3.75, Config::General-2.65, Config::INI-0.027, Config::MVP-2.200012, Config::Simple-4.58, Config::Tiny-2.28, constant-1.33, CPAN::Meta::Check-0.014, CPANPLUS-0.9914, Crypt::DES-2.07, Crypt::Rijndael-1.16, Cwd-3.75, Cwd::Guard-0.05, Data::Dump-1.25, Data::Dumper-2.183, Data::Dumper::Concise-2.023, Data::Grove-0.08, Data::OptList-0.112, Data::Section-0.200007, Data::Section::Simple-0.07, Data::Stag-0.14, Data::Types-0.17, Data::UUID-1.226, Date::Handler-1.2, Date::Language-2.33, DateTime-1.58, DateTime::Locale-1.36, DateTime::TimeZone-2.53, DateTime::Tiny-1.07, DBD::CSV-0.59, DBD::SQLite-1.70, DBI-1.643, DBIx::Admin::TableInfo-3.04, DBIx::ContextualFetch-1.03, DBIx::Simple-1.37, Devel::CheckCompiler-0.07, Devel::CheckLib-1.16, Devel::Cycle-1.12, Devel::GlobalDestruction-0.14, Devel::OverloadInfo-0.007, Devel::Size-0.83, Devel::StackTrace-2.04, Digest::HMAC-1.04, Digest::MD5::File-0.08, Digest::SHA1-2.13, Dist::CheckConflicts-0.11, Dist::Zilla-6.025, Email::Date::Format-1.005, Encode-3.19, Encode::Locale-1.05, Error-0.17029, Eval::Closure-0.14, Exception::Class-1.45, Expect-1.35, Exporter-5.74, Exporter::Declare-0.114, Exporter::Tiny-1.004000, ExtUtils::CBuilder-0.280236, ExtUtils::Config-0.008, ExtUtils::Constant-0.25, 
ExtUtils::CppGuess-0.26, ExtUtils::Helpers-0.026, ExtUtils::InstallPaths-0.012, ExtUtils::MakeMaker-7.64, ExtUtils::ParseXS-3.44, Fennec::Lite-0.004, File::CheckTree-4.42, File::Copy::Recursive-0.45, File::Copy::Recursive::Reduced-0.006, File::Find::Rule-0.34, File::Find::Rule::Perl-1.16, File::Grep-0.02, File::HomeDir-1.006, File::Listing-6.15, File::Next-1.18, File::Path-2.18, File::pushd-1.016, File::Remove-1.61, File::ShareDir-1.118, File::ShareDir::Install-0.14, File::Slurp-9999.32, File::Slurp::Tiny-0.004, File::Slurper-0.013, File::Spec-3.75, File::Temp-0.2311, File::Which-1.27, Font::TTF-1.06, Getopt::Long-2.52, Getopt::Long::Descriptive-0.110, Git-0.42, GO-0.04, GO::Utils-0.15, Graph-0.9725, Graph::ReadWrite-2.10, Hash::Merge-0.302, Heap-0.80, HTML::Entities::Interpolate-1.10, HTML::Form-6.10, HTML::Parser-3.78, HTML::Tagset-3.20, HTML::Template-2.97, HTML::Tree-5.07, HTTP::Cookies-6.10, HTTP::Daemon-6.14, HTTP::Date-6.05, HTTP::Negotiate-6.01, HTTP::Request-6.37, HTTP::Tiny-0.082, if-0.0608, Ima::DBI-0.35, Import::Into-1.002005, Importer-0.026, Inline-0.86, IO::HTML-1.004, IO::Socket::SSL-2.075, IO::String-1.08, IO::Stringy-2.113, IO::Tty-1.16, IPC::Cmd-1.04, IPC::Run-20220807.0, IPC::Run3-0.048, IPC::System::Simple-1.30, JSON-4.09, JSON::XS-4.03, Lingua::EN::PluralToSingular-0.21, List::AllUtils-0.19, List::MoreUtils-0.430, List::MoreUtils::XS-0.430, List::SomeUtils-0.58, List::Util-1.63, List::UtilsBy-0.12, local::lib-2.000029, Locale::Maketext::Simple-0.21, Log::Dispatch-2.70, Log::Dispatchouli-2.023, Log::Handler-0.90, Log::Log4perl-1.56, Log::Message-0.08, Log::Message::Simple-0.10, Log::Report-1.33, Log::Report::Optional-1.07, Logger::Simple-2.0, LWP::MediaTypes-6.04, LWP::Protocol::https-6.10, LWP::Simple-6.67, Mail::Util-2.21, Math::Bezier-0.01, Math::CDF-0.1, Math::Round-0.07, Math::Utils-1.14, Math::VecStat-0.08, MCE::Mutex-1.879, Meta::Builder-0.004, MIME::Base64-3.16, MIME::Charset-1.013.1, MIME::Lite-3.033, MIME::Types-2.22, 
Mixin::Linewise::Readers-0.110, Mock::Quick-1.111, Module::Build-0.4231, Module::Build::Tiny-0.039, Module::Build::XSUtil-0.19, Module::CoreList-5.20220820, Module::Implementation-0.09, Module::Install-1.19, Module::Load-0.36, Module::Load::Conditional-0.74, Module::Metadata-1.000037, Module::Path-0.19, Module::Pluggable-5.2, Module::Runtime-0.016, Module::Runtime::Conflicts-0.003, Moo-2.005004, Moose-2.2201, MooseX::LazyRequire-0.11, MooseX::OneArgNew-0.006, MooseX::Role::Parameterized-1.11, MooseX::SetOnce-0.201, MooseX::Types-0.50, MooseX::Types::Perl-0.101343, Mouse-v2.5.10, Mozilla::CA-20211001, MRO::Compat-0.15, namespace::autoclean-0.29, namespace::clean-0.27, Net::Domain-3.14, Net::HTTP-6.22, Net::SMTP::SSL-1.04, Net::SNMP-v6.0.1, Net::SSLeay-1.92, Number::Compare-0.03, Number::Format-1.75, Object::Accessor-0.48, Object::InsideOut-4.05, Package::Constants-0.06, Package::DeprecationManager-0.17, Package::Stash-0.40, Package::Stash::XS-0.30, PadWalker-2.5, Parallel::ForkManager-2.02, Params::Check-0.38, Params::Util-1.102, Params::Validate-1.30, Params::ValidationCompiler-0.30, parent-0.238, Parse::RecDescent-1.967015, Path::Tiny-0.124, PDF::API2-2.043, Perl::OSType-1.010, PerlIO::utf8_strict-0.009, Pod::Elemental-0.103005, Pod::Escapes-1.07, Pod::Eventual-0.094002, Pod::LaTeX-0.61, Pod::Man-4.14, Pod::Parser-1.66, Pod::Plainer-1.04, Pod::POM-2.01, Pod::Simple-3.43, Pod::Weaver-4.018, Readonly-2.05, Regexp::Common-2017060201, Role::HasMessage-0.006, Role::Identifiable::HasIdent-0.008, Role::Tiny-2.002004, Scalar::Util-1.63, Scalar::Util::Numeric-0.40, Scope::Guard-0.21, Set::Array-0.30, Set::IntervalTree-0.12, Set::IntSpan-1.19, Set::IntSpan::Fast-1.15, Set::Object-1.42, Set::Scalar-1.29, Shell-0.73, Socket-2.036, Software::License-0.104002, Specio-0.48, SQL::Abstract-2.000001, SQL::Statement-1.414, Statistics::Basic-1.6611, Statistics::Descriptive-3.0800, Storable-3.25, strictures-2.000006, String::Flogger-1.101245, String::Print-0.94, 
String::RewritePrefix-0.008, String::Truncate-1.100602, Sub::Exporter-0.988, Sub::Exporter::ForMethods-0.100054, Sub::Exporter::Progressive-0.001013, Sub::Identify-0.14, Sub::Info-0.002, Sub::Install-0.928, Sub::Name-0.26, Sub::Quote-2.006006, Sub::Uplevel-0.2800, Sub::Uplevel-0.2800, SVG-2.87, Switch-2.17, Sys::Info-0.7811, Sys::Info::Base-0.7807, Sys::Info::Driver::Linux-0.7905, Sys::Info::Driver::Unknown-0.79, Template-3.101, Template::Plugin::Number::Format-1.06, Term::Encoding-0.03, Term::ReadKey-2.38, Term::ReadLine::Gnu-1.42, Term::Table-0.016, Term::UI-0.50, Test-1.26, Test2::Plugin::NoWarnings-0.09, Test2::Require::Module-0.000145, Test::ClassAPI-1.07, Test::CleanNamespaces-0.24, Test::Deep-1.130, Test::Differences-0.69, Test::Exception-0.43, Test::Fatal-0.016, Test::File::ShareDir::Dist-1.001002, Test::Harness-3.44, Test::LeakTrace-0.17, Test::Memory::Cycle-1.06, Test::More-1.302191, Test::More::UTF8-0.05, Test::Most-0.37, Test::Needs-0.002009, Test::NoWarnings-1.06, Test::Output-1.033, Test::Pod-1.52, Test::Requires-0.11, Test::RequiresInternet-0.05, Test::Simple-1.302191, Test::Version-2.09, Test::Warn-0.37, Test::Warnings-0.031, Test::Without::Module-0.20, Text::Aligner-0.16, Text::Balanced-2.06, Text::CSV-2.02, Text::CSV_XS-1.48, Text::Diff-1.45, Text::Format-0.62, Text::Glob-0.11, Text::Iconv-1.7, Text::ParseWords-3.31, Text::Soundex-3.05, Text::Table-1.134, Text::Template-1.61, Thread::Queue-3.13, Throwable-1.000, Tie::Function-0.02, Tie::IxHash-1.23, Time::HiRes-1.9764, Time::Local-1.30, Time::Piece-1.3401, Time::Piece::MySQL-0.06, Tree::DAG_Node-1.32, Try::Tiny-0.31, Types::Serialiser-1.01, Unicode::LineBreak-2019.001, UNIVERSAL::moniker-0.08, Unix::Processors-2.046, URI-5.12, URI::Escape-5.12, Variable::Magic-0.62, version-0.9929, Want-0.29, WWW::RobotRules-6.02, XML::Bare-0.53, XML::DOM-1.46, XML::Filter::BufferText-1.01, XML::NamespaceSupport-1.12, XML::Parser-2.46, XML::RegExp-0.04, XML::SAX-1.02, XML::SAX::Base-1.09, XML::SAX::Expat-0.51, 
XML::SAX::Writer-0.57, XML::Simple-2.25, XML::Tiny-2.07, XML::Twig-3.52, XML::XPath-1.48, XSLoader-0.24, YAML-1.30, YAML::Tiny-1.73

"},{"location":"available_software/detail/Pillow-SIMD/","title":"Pillow-SIMD","text":"

Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.

https://github.com/uploadcare/pillow-simd

"},{"location":"available_software/detail/Pillow-SIMD/#available-modules","title":"Available modules","text":"

The overview below shows which Pillow-SIMD installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pillow-SIMD, load one of these modules using a module load command like:

module load Pillow-SIMD/9.5.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow-SIMD/9.5.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pillow/","title":"Pillow","text":"

Pillow is the 'friendly PIL fork' by Alex Clark and Contributors. PIL is the Python Imaging Library by Fredrik Lundh and Contributors.

https://pillow.readthedocs.org/

"},{"location":"available_software/detail/Pillow/#available-modules","title":"Available modules","text":"

The overview below shows which Pillow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pillow, load one of these modules using a module load command like:

module load Pillow/10.2.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pillow/10.2.0-GCCcore-13.2.0 x x x x x x x x x Pillow/10.0.0-GCCcore-12.3.0 x x x x x x x x x Pillow/9.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Pint/","title":"Pint","text":"

Pint is a Python package to define, operate and manipulate physical quantities: the product of a numerical value and a unit of measurement. It allows arithmetic operations between them and conversions from and to different units.

https://github.com/hgrecco/pint

"},{"location":"available_software/detail/Pint/#available-modules","title":"Available modules","text":"

The overview below shows which Pint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pint, load one of these modules using a module load command like:

module load Pint/0.24-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pint/0.24-GCCcore-13.2.0 x x x x x x x x x Pint/0.23-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pint/#pint024-gcccore-1320","title":"Pint/0.24-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

appdirs-1.4.4, flexcache-0.3, flexparser-0.3.1, Pint-0.24

"},{"location":"available_software/detail/PostgreSQL/","title":"PostgreSQL","text":"

PostgreSQL is a powerful, open source object-relational database system. It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages). It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP. It also supports storage of binary large objects, including pictures, sounds, or video. It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

https://www.postgresql.org/

"},{"location":"available_software/detail/PostgreSQL/#available-modules","title":"Available modules","text":"

The overview below shows which PostgreSQL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PostgreSQL, load one of these modules using a module load command like:

module load PostgreSQL/16.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PostgreSQL/16.1-GCCcore-13.2.0 x x x x x x x x x PostgreSQL/16.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PuLP/","title":"PuLP","text":"

PuLP is an LP modeler written in Python. PuLP can generate MPS or LP files and call GLPK, COIN-OR CLP/CBC, CPLEX, GUROBI, MOSEK, XPRESS, CHOCO, MIPCL, SCIP to solve linear problems.

https://github.com/coin-or/pulp

"},{"location":"available_software/detail/PuLP/#available-modules","title":"Available modules","text":"

The overview below shows which PuLP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PuLP, load one of these modules using a module load command like:

module load PuLP/2.8.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PuLP/2.8.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PyOpenGL/","title":"PyOpenGL","text":"

PyOpenGL is the most common cross-platform Python binding to OpenGL and related APIs.

http://pyopengl.sourceforge.net

"},{"location":"available_software/detail/PyOpenGL/#available-modules","title":"Available modules","text":"

The overview below shows which PyOpenGL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyOpenGL, load one of these modules using a module load command like:

module load PyOpenGL/3.1.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyOpenGL/3.1.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyOpenGL/#pyopengl317-gcccore-1230","title":"PyOpenGL/3.1.7-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

PyOpenGL-3.1.7, PyOpenGL-accelerate-3.1.7

"},{"location":"available_software/detail/PyQt-builder/","title":"PyQt-builder","text":"

PyQt-builder is the PEP 517-compliant build system for PyQt and projects that extend PyQt. It extends the SIP build system and uses Qt\u2019s qmake to perform the actual compilation and installation of extension modules.

http://www.example.com

"},{"location":"available_software/detail/PyQt-builder/#available-modules","title":"Available modules","text":"

The overview below shows which PyQt-builder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyQt-builder, load one of these modules using a module load command like:

module load PyQt-builder/1.15.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt-builder/1.15.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyQt-builder/#pyqt-builder1154-gcccore-1230","title":"PyQt-builder/1.15.4-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

PyQt-builder-1.15.4

"},{"location":"available_software/detail/PyQt5/","title":"PyQt5","text":"

PyQt5 is a set of Python bindings for v5 of the Qt application framework from The Qt Company. This bundle includes PyQtWebEngine, a set of Python bindings for The Qt Company\u2019s Qt WebEngine framework.

https://www.riverbankcomputing.com/software/pyqt

"},{"location":"available_software/detail/PyQt5/#available-modules","title":"Available modules","text":"

The overview below shows which PyQt5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyQt5, load one of these modules using a module load command like:

module load PyQt5/5.15.10-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyQt5/5.15.10-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/PyTorch/","title":"PyTorch","text":"

Tensors and Dynamic neural networks in Python with strong GPU acceleration. PyTorch is a deep learning framework that puts Python first.

https://pytorch.org/

"},{"location":"available_software/detail/PyTorch/#available-modules","title":"Available modules","text":"

The overview below shows which PyTorch installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyTorch, load one of these modules using a module load command like:

module load PyTorch/2.1.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyTorch/2.1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/PyYAML/","title":"PyYAML","text":"

PyYAML is a YAML parser and emitter for the Python programming language.

https://github.com/yaml/pyyaml

"},{"location":"available_software/detail/PyYAML/#available-modules","title":"Available modules","text":"

The overview below shows which PyYAML installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyYAML, load one of these modules using a module load command like:

module load PyYAML/6.0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyYAML/6.0.1-GCCcore-13.2.0 x x x x x x x x x PyYAML/6.0-GCCcore-12.3.0 x x x x x x x x x PyYAML/6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/PyZMQ/","title":"PyZMQ","text":"

Python bindings for ZeroMQ

https://www.zeromq.org/bindings:python

"},{"location":"available_software/detail/PyZMQ/#available-modules","title":"Available modules","text":"

The overview below shows which PyZMQ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using PyZMQ, load one of these modules using a module load command like:

module load PyZMQ/25.1.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 PyZMQ/25.1.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pygments/","title":"Pygments","text":"

Generic syntax highlighter suitable for use in code hosting, forums, wikis or other applications that need to prettify source code.

https://pygments.org/

"},{"location":"available_software/detail/Pygments/#available-modules","title":"Available modules","text":"

The overview below shows which Pygments installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pygments, load one of these modules using a module load command like:

module load Pygments/2.18.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pygments/2.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Pysam/","title":"Pysam","text":"

Pysam is a Python module for reading and manipulating SAM files. It's a lightweight wrapper of the samtools C-API. Pysam also includes an interface for tabix.

https://github.com/pysam-developers/pysam

"},{"location":"available_software/detail/Pysam/#available-modules","title":"Available modules","text":"

The overview below shows which Pysam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Pysam, load one of these modules using a module load command like:

module load Pysam/0.22.0-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Pysam/0.22.0-GCC-12.3.0 x x x x x x x x x Pysam/0.21.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Python-bundle-PyPI/","title":"Python-bundle-PyPI","text":"

Bundle of Python packages from PyPI

https://python.org/

"},{"location":"available_software/detail/Python-bundle-PyPI/#available-modules","title":"Available modules","text":"

The overview below shows which Python-bundle-PyPI installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Python-bundle-PyPI, load one of these modules using a module load command like:

module load Python-bundle-PyPI/2023.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python-bundle-PyPI/2023.10-GCCcore-13.2.0 x x x x x x x x x Python-bundle-PyPI/2023.06-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202310-gcccore-1320","title":"Python-bundle-PyPI/2023.10-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.13.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.6, bitarray-2.8.2, bitstring-4.1.2, blist-1.3.6, cachecontrol-0.13.1, cachy-0.3.0, certifi-2023.7.22, cffi-1.16.0, chardet-5.2.0, charset-normalizer-3.3.1, cleo-2.0.1, click-8.1.7, cloudpickle-3.0.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-3.0.4, decorator-5.1.1, distlib-0.3.7, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.6, ecdsa-0.18.0, editables-0.5, exceptiongroup-1.1.3, execnet-2.0.2, filelock-3.13.0, fsspec-2023.10.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.8.0, importlib_resources-6.1.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.3.0, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.3.2, jsonschema-4.17.3, keyring-24.2.0, keyrings.alt-5.0.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.1.0, more-itertools-10.1.0, msgpack-1.0.7, netaddr-0.9.0, netifaces-0.11.0, packaging-23.2, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.2, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, pluggy-1.3.0, pooch-1.8.0, psutil-5.9.6, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.19.0, pydevtool-0.3.0, Pygments-2.16.1, Pygments-2.16.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.1, pyrsistent-0.20.0, pytest-7.4.3, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3.post1, rapidfuzz-2.15.2, regex-2023.10.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.6.0, rich-click-1.7.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.4, simplegeneric-0.8.1, simplejson-3.19.2, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, sphinx-7.2.6, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-jsmath-1.0.1, 
sphinxcontrib_applehelp-1.0.7, sphinxcontrib_devhelp-1.0.5, sphinxcontrib_htmlhelp-2.0.4, sphinxcontrib_qthelp-1.0.6, sphinxcontrib_serializinghtml-1.1.9, sphinxcontrib_websupport-1.2.6, tabulate-0.9.0, threadpoolctl-3.2.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.12.1, ujson-5.8.0, urllib3-2.0.7, wcwidth-0.2.8, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.17.0

"},{"location":"available_software/detail/Python-bundle-PyPI/#python-bundle-pypi202306-gcccore-1230","title":"Python-bundle-PyPI/2023.06-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.13, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-23.1.0, Babel-2.12.1, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.5, bitstring-4.0.2, blist-1.3.6, CacheControl-0.12.14, cachy-0.3.0, certifi-2023.5.7, cffi-1.15.1, chardet-5.1.0, charset-normalizer-3.1.0, cleo-2.0.1, click-8.1.3, cloudpickle-2.2.1, colorama-0.4.6, commonmark-0.9.1, crashtest-0.4.1, Cython-0.29.35, decorator-5.1.1, distlib-0.3.6, distro-1.8.0, docopt-0.6.2, docutils-0.20.1, doit-0.36.0, dulwich-0.21.5, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.1.1, execnet-1.9.0, filelock-3.12.2, fsspec-2023.6.0, future-0.18.3, glob2-0.7, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-6.7.0, importlib_resources-5.12.0, iniconfig-2.0.0, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.3, keyring-23.13.1, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, markdown-it-py-3.0.0, MarkupSafe-2.1.3, mdurl-0.1.2, mock-5.0.2, more-itertools-9.1.0, msgpack-1.0.5, netaddr-0.8.0, netifaces-0.11.0, packaging-23.1, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.11.1, pbr-5.11.1, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, pluggy-1.2.0, pooch-1.7.0, psutil-5.9.5, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.5.0, pycparser-2.21, pycryptodome-3.18.0, pydevtool-0.3.0, Pygments-2.15.1, Pygments-2.15.1, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.1.0, pyrsistent-0.19.3, pytest-7.4.0, pytest-xdist-3.3.1, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2023.3, rapidfuzz-2.15.1, regex-2023.6.3, requests-2.31.0, requests-toolbelt-1.0.0, rich-13.4.2, rich-click-1.6.1, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, shellingham-1.5.0.post1, simplegeneric-0.8.1, simplejson-3.19.1, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-7.0.1, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.4, sphinxcontrib-devhelp-1.0.2, 
sphinxcontrib-htmlhelp-2.0.1, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.8, ujson-5.8.0, urllib3-1.26.16, wcwidth-0.2.6, webencodings-0.5.1, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.15.0
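To check whether a particular package ships as an extension of a bundle module like this, one can filter the comma-separated extension list. A minimal sketch (the `exts` variable is a shortened stand-in for the full list above, not the real data):

```shell
# Illustrative: test membership in a comma-separated extension list.
exts="alabaster-0.7.13, appdirs-1.4.4, pytest-7.4.0, wcwidth-0.2.6"
# Word-split the list onto separate lines, then match on the package prefix.
if printf '%s\n' $exts | grep -q '^pytest-'; then
  echo "pytest is included"
fi
```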

"},{"location":"available_software/detail/Python/","title":"Python","text":"

Python is a programming language that lets you work more quickly and integrate your systems more effectively.

https://python.org/

"},{"location":"available_software/detail/Python/#available-modules","title":"Available modules","text":"

The overview below shows which Python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Python, load one of these modules using a module load command like:

module load Python/3.11.5-GCCcore-13.2.0\n
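The "new to old" ordering used in these overviews can be reproduced for a list of module names with a version-aware sort; a sketch, assuming GNU `sort` with `-V` support is available:

```shell
# Sketch: version-sort module names from newest to oldest (GNU sort -V).
printf '%s\n' \
  Python/3.10.8-GCCcore-12.2.0 \
  Python/3.11.5-GCCcore-13.2.0 \
  Python/3.11.3-GCCcore-12.3.0 | sort -rV
```

`sort -V` compares the embedded version numbers numerically (so 3.11.5 sorts above 3.10.8), and `-r` reverses the order to put the newest first.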

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Python/3.11.5-GCCcore-13.2.0 x x x x x x x x x Python/3.11.3-GCCcore-12.3.0 x x x x x x x x x Python/3.10.8-GCCcore-12.2.0-bare x x x x x x - x x Python/3.10.8-GCCcore-12.2.0 x x x x x x - x x Python/2.7.18-GCCcore-12.2.0-bare x x x x x x - x x"},{"location":"available_software/detail/Python/#python3115-gcccore-1320","title":"Python/3.11.5-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

flit_core-3.9.0, packaging-23.2, pip-23.2.1, setuptools-68.2.2, setuptools-scm-8.0.4, tomli-2.0.1, typing_extensions-4.8.0, wheel-0.41.2

"},{"location":"available_software/detail/Python/#python3113-gcccore-1230","title":"Python/3.11.3-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

flit_core-3.9.0, packaging-23.1, pip-23.1.2, setuptools-67.7.2, setuptools_scm-7.1.0, tomli-2.0.1, typing_extensions-4.6.3, wheel-0.40.0

"},{"location":"available_software/detail/Python/#python3108-gcccore-1220","title":"Python/3.10.8-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

alabaster-0.7.12, appdirs-1.4.4, asn1crypto-1.5.1, atomicwrites-1.4.1, attrs-22.1.0, Babel-2.11.0, backports.entry-points-selectable-1.2.0, backports.functools_lru_cache-1.6.4, bcrypt-4.0.1, bitstring-3.1.9, blist-1.3.6, CacheControl-0.12.11, cachy-0.3.0, certifi-2022.9.24, cffi-1.15.1, chardet-5.0.0, charset-normalizer-2.1.1, cleo-1.0.0a5, click-8.1.3, clikit-0.6.2, cloudpickle-2.2.0, colorama-0.4.6, commonmark-0.9.1, crashtest-0.3.1, cryptography-38.0.3, Cython-0.29.32, decorator-5.1.1, distlib-0.3.6, docopt-0.6.2, docutils-0.19, doit-0.36.0, dulwich-0.20.50, ecdsa-0.18.0, editables-0.3, exceptiongroup-1.0.1, execnet-1.9.0, filelock-3.8.0, flit-3.8.0, flit_core-3.8.0, flit_scm-1.7.0, fsspec-2022.11.0, future-0.18.2, glob2-0.7, hatch_fancy_pypi_readme-22.8.0, hatch_vcs-0.2.0, hatchling-1.11.1, html5lib-1.1, idna-3.4, imagesize-1.4.1, importlib_metadata-5.0.0, importlib_resources-5.10.0, iniconfig-1.1.1, intervaltree-3.1.0, intreehooks-1.0, ipaddress-1.0.23, jaraco.classes-3.2.3, jeepney-0.8.0, Jinja2-3.1.2, joblib-1.2.0, jsonschema-4.17.0, keyring-23.11.0, keyrings.alt-4.2.0, liac-arff-2.5.0, lockfile-0.12.2, MarkupSafe-2.1.1, mock-4.0.3, more-itertools-9.0.0, msgpack-1.0.4, netaddr-0.8.0, netifaces-0.11.0, packaging-21.3, paramiko-2.12.0, pastel-0.2.1, pathlib2-2.3.7.post1, pathspec-0.10.1, pbr-5.11.0, pexpect-4.8.0, pip-22.3.1, pkginfo-1.8.3, platformdirs-2.5.3, pluggy-1.0.0, poetry-1.2.2, poetry-core-1.3.2, poetry_plugin_export-1.2.0, pooch-1.6.0, psutil-5.9.4, ptyprocess-0.7.0, py-1.11.0, py_expression_eval-0.3.14, pyasn1-0.4.8, pycparser-2.21, pycryptodome-3.17, pydevtool-0.3.0, Pygments-2.13.0, pylev-1.4.0, PyNaCl-1.5.0, pyparsing-3.0.9, pyrsistent-0.19.2, pytest-7.2.0, pytest-xdist-3.1.0, python-dateutil-2.8.2, pytoml-0.1.21, pytz-2022.6, regex-2022.10.31, requests-2.28.1, requests-toolbelt-0.9.1, rich-13.1.0, rich-click-1.6.0, scandir-1.10.0, SecretStorage-3.3.3, semantic_version-2.10.0, setuptools-63.4.3, setuptools-rust-1.5.2, setuptools_scm-7.0.5, 
shellingham-1.5.0, simplegeneric-0.8.1, simplejson-3.17.6, six-1.16.0, snowballstemmer-2.2.0, sortedcontainers-2.4.0, Sphinx-5.3.0, sphinx-bootstrap-theme-0.8.1, sphinxcontrib-applehelp-1.0.2, sphinxcontrib-devhelp-1.0.2, sphinxcontrib-htmlhelp-2.0.0, sphinxcontrib-jsmath-1.0.1, sphinxcontrib-qthelp-1.0.3, sphinxcontrib-serializinghtml-1.1.5, sphinxcontrib-websupport-1.2.4, tabulate-0.9.0, threadpoolctl-3.1.0, toml-0.10.2, tomli-2.0.1, tomli_w-1.0.0, tomlkit-0.11.6, typing_extensions-4.4.0, ujson-5.5.0, urllib3-1.26.12, virtualenv-20.16.6, wcwidth-0.2.5, webencodings-0.5.1, wheel-0.38.4, xlrd-2.0.1, zipfile36-0.1.3, zipp-3.10.0

"},{"location":"available_software/detail/Qhull/","title":"Qhull","text":"

Qhull computes the convex hull, Delaunay triangulation, Voronoi diagram, halfspace intersection about a point, furthest-site Delaunay triangulation, and furthest-site Voronoi diagram. The source code runs in 2-d, 3-d, 4-d, and higher dimensions. Qhull implements the Quickhull algorithm for computing the convex hull.

http://www.qhull.org

"},{"location":"available_software/detail/Qhull/#available-modules","title":"Available modules","text":"

The overview below shows which Qhull installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Qhull, load one of these modules using a module load command like:

module load Qhull/2020.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qhull/2020.2-GCCcore-13.2.0 x x x x x x x x x Qhull/2020.2-GCCcore-12.3.0 x x x x x x x x x Qhull/2020.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Qt5/","title":"Qt5","text":"

Qt is a comprehensive cross-platform C++ application framework.

https://qt.io/

"},{"location":"available_software/detail/Qt5/#available-modules","title":"Available modules","text":"

The overview below shows which Qt5 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Qt5, load one of these modules using a module load command like:

module load Qt5/5.15.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Qt5/5.15.13-GCCcore-13.2.0 x x x x x x x x x Qt5/5.15.10-GCCcore-12.3.0 x x x x x x x x x Qt5/5.15.7-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/QuantumESPRESSO/","title":"QuantumESPRESSO","text":"

Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

https://www.quantum-espresso.org

"},{"location":"available_software/detail/QuantumESPRESSO/#available-modules","title":"Available modules","text":"

The overview below shows which QuantumESPRESSO installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using QuantumESPRESSO, load one of these modules using a module load command like:

module load QuantumESPRESSO/7.3.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 QuantumESPRESSO/7.3.1-foss-2023a x x x x x x x x x QuantumESPRESSO/7.2-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/R-bundle-Bioconductor/","title":"R-bundle-Bioconductor","text":"

Bioconductor provides tools for the analysis and comprehension of high-throughput genomic data.

https://bioconductor.org

"},{"location":"available_software/detail/R-bundle-Bioconductor/#available-modules","title":"Available modules","text":"

The overview below shows which R-bundle-Bioconductor installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using R-bundle-Bioconductor, load one of these modules using a module load command like:

module load R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2 x x x x x x x x x R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2 x x x x x x - x x"},{"location":"available_software/detail/R-bundle-Bioconductor/#r-bundle-bioconductor318-foss-2023a-r-432","title":"R-bundle-Bioconductor/3.18-foss-2023a-R-4.3.2","text":"

This is a list of extensions included in the module:

affxparser-1.74.0, affy-1.80.0, affycoretools-1.74.0, affyio-1.72.0, AgiMicroRna-2.52.0, agricolae-1.3-7, ALDEx2-1.34.0, ALL-1.44.0, ANCOMBC-2.4.0, annaffy-1.74.0, annotate-1.80.0, AnnotationDbi-1.64.1, AnnotationFilter-1.26.0, AnnotationForge-1.44.0, AnnotationHub-3.10.0, anytime-0.3.9, aroma.affymetrix-3.2.1, aroma.apd-0.7.0, aroma.core-3.3.0, aroma.light-3.32.0, ash-1.0-15, ATACseqQC-1.26.0, AUCell-1.24.0, aws.s3-0.3.21, aws.signature-0.6.0, babelgene-22.9, ballgown-2.34.0, basilisk-1.14.2, basilisk.utils-1.14.1, batchelor-1.18.1, baySeq-2.36.0, beachmat-2.18.0, BH-1.84.0-0, Biobase-2.62.0, BiocBaseUtils-1.4.0, BiocFileCache-2.10.1, BiocGenerics-0.48.0, BiocIO-1.12.0, BiocManager-1.30.22, BiocNeighbors-1.20.2, BiocParallel-1.36.0, BiocSingular-1.18.0, BiocStyle-2.30.0, BiocVersion-3.18.1, biomaRt-2.58.0, biomformat-1.30.0, Biostrings-2.70.0, biovizBase-1.50.0, blme-1.0-5, bluster-1.12.0, bookdown-0.37, BSgenome-1.70.1, BSgenome.Cfamiliaris.UCSC.canFam3-1.4.0, BSgenome.Hsapiens.UCSC.hg19-1.4.3, BSgenome.Hsapiens.UCSC.hg38-1.4.5, BSgenome.Mmusculus.UCSC.mm10-1.4.3, bsseq-1.38.0, bumphunter-1.44.0, ca-0.71.1, CAGEfightR-1.22.0, CAGEr-2.8.0, CAMERA-1.58.0, Category-2.68.0, ccdata-1.28.0, ccmap-1.28.0, CGHbase-1.62.0, CGHcall-2.64.0, ChIPpeakAnno-3.36.0, chromVAR-1.24.0, clusterProfiler-4.10.0, CNEr-1.38.0, coloc-5.2.3, colorRamps-2.3.1, ComplexHeatmap-2.18.0, ConsensusClusterPlus-1.66.0, conumee-1.36.0, crossmeta-1.28.0, cummeRbund-2.44.0, cytolib-2.14.1, CytoML-2.14.0, dada2-1.30.0, ddPCRclust-1.22.0, DECIPHER-2.30.0, DeconRNASeq-1.44.0, decontam-1.22.0, decoupleR-2.8.0, DEGseq-1.56.1, DelayedArray-0.28.0, DelayedMatrixStats-1.24.0, densEstBayes-1.0-2.2, derfinder-1.36.0, derfinderHelper-1.36.0, DESeq2-1.42.0, diffcyt-1.22.0, dir.expiry-1.10.0, directlabels-2024.1.21, DirichletMultinomial-1.44.0, DNABarcodes-1.32.0, DNAcopy-1.76.0, DO.db-2.9, docopt-0.7.1, DOSE-3.28.2, dqrng-0.3.2, DRIMSeq-1.30.0, DropletUtils-1.22.0, DSS-2.50.1, dupRadar-1.32.0, DynDoc-1.80.0, 
EBImage-4.44.0, edgeR-4.0.12, egg-0.4.5, emmeans-1.10.0, enrichplot-1.22.0, EnsDb.Hsapiens.v75-2.99.0, EnsDb.Hsapiens.v79-2.99.0, EnsDb.Hsapiens.v86-2.99.0, ensembldb-2.26.0, escape-1.12.0, estimability-1.4.1, ExperimentHub-2.10.0, extraDistr-1.10.0, factoextra-1.0.7, fANCOVA-0.6-1, fda-6.1.4, FDb.InfiniumMethylation.hg19-2.2.0, fds-1.8, feature-1.2.15, fgsea-1.28.0, filelock-1.0.3, flowAI-1.32.0, flowClean-1.40.0, flowClust-3.40.0, flowCore-2.14.0, flowDensity-1.36.1, flowFP-1.60.0, flowMerge-2.50.0, flowPeaks-1.48.0, FlowSOM-2.10.0, FlowSorted.Blood.EPIC-2.6.0, FlowSorted.CordBloodCombined.450k-1.18.0, flowStats-4.14.1, flowViz-1.66.0, flowWorkspace-4.14.2, FRASER-1.14.0, fresh-0.2.0, gcrma-2.74.0, gdsfmt-1.38.0, genefilter-1.84.0, geneLenDataBase-1.38.0, geneplotter-1.80.0, GENESIS-2.32.0, GENIE3-1.24.0, GenomeInfoDb-1.38.5, GenomeInfoDbData-1.2.11, GenomicAlignments-1.38.2, GenomicFeatures-1.54.1, GenomicFiles-1.38.0, GenomicInteractions-1.36.0, GenomicRanges-1.54.1, GenomicScores-2.14.3, GEOmap-2.5-5, GEOquery-2.70.0, ggbio-1.50.0, ggcyto-1.30.0, ggdendro-0.1.23, ggnewscale-0.4.9, ggpointdensity-0.1.0, ggrastr-1.0.2, ggseqlogo-0.1, ggthemes-5.0.0, ggtree-3.10.0, GLAD-2.66.0, Glimma-2.12.0, GlobalAncova-4.20.0, globaltest-5.56.0, GO.db-3.18.0, GOSemSim-2.28.1, goseq-1.54.0, GOstats-2.68.0, graph-1.80.0, graphite-1.48.0, GSEABase-1.64.0, gsmoothr-0.1.7, gson-0.1.0, GSVA-1.50.0, Gviz-1.46.1, GWASExactHW-1.01, GWASTools-1.48.0, HDF5Array-1.30.0, HDO.db-0.99.1, hdrcde-3.4, heatmaply-1.5.0, hgu133plus2.db-3.13.0, HiCBricks-1.20.0, HiCcompare-1.24.0, HMMcopy-1.44.0, Homo.sapiens-1.3.1, IHW-1.30.0, IlluminaHumanMethylation450kanno.ilmn12.hg19-0.6.1, IlluminaHumanMethylation450kmanifest-0.4.0, IlluminaHumanMethylationEPICanno.ilm10b2.hg19-0.6.0, IlluminaHumanMethylationEPICanno.ilm10b4.hg19-0.6.0, IlluminaHumanMethylationEPICmanifest-0.3.0, illuminaio-0.44.0, impute-1.76.0, InteractionSet-1.30.0, interactiveDisplayBase-1.40.0, intervals-0.15.4, IRanges-2.36.0, 
isva-1.9, JASPAR2020-0.99.10, KEGGgraph-1.62.0, KEGGREST-1.42.0, LEA-3.14.0, limma-3.58.1, log4r-0.4.3, lpsymphony-1.30.0, lsa-0.73.3, lumi-2.54.0, M3Drop-1.28.0, marray-1.80.0, maSigPro-1.74.0, MassSpecWavelet-1.68.0, MatrixGenerics-1.14.0, MBA-0.1-0, MEDIPS-1.54.0, MetaboCoreUtils-1.10.0, metagenomeSeq-1.43.0, metaMA-3.1.3, metap-1.9, metapod-1.10.1, MethylSeekR-1.42.0, methylumi-2.48.0, Mfuzz-2.62.0, mia-1.10.0, minfi-1.48.0, missMethyl-1.36.0, mixOmics-6.26.0, mixsqp-0.3-54, MLInterfaces-1.82.0, MotifDb-1.44.0, motifmatchr-1.24.0, motifStack-1.46.0, MsCoreUtils-1.14.1, MsExperiment-1.4.0, MsFeatures-1.10.0, msigdbr-7.5.1, MSnbase-2.28.1, MSstats-4.10.0, MSstatsConvert-1.12.0, MSstatsLiP-1.8.1, MSstatsPTM-2.4.2, MSstatsTMT-2.10.0, MultiAssayExperiment-1.28.0, MultiDataSet-1.30.0, multtest-2.58.0, muscat-1.16.0, mutoss-0.1-13, mzID-1.40.0, mzR-2.36.0, NADA-1.6-1.1, ncdfFlow-2.48.0, NMF-0.26, NOISeq-2.46.0, numbat-1.3.2-1, oligo-1.66.0, oligoClasses-1.64.0, ontologyIndex-2.11, oompaBase-3.2.9, oompaData-3.1.3, openCyto-2.14.0, org.Hs.eg.db-3.18.0, org.Mm.eg.db-3.18.0, org.Rn.eg.db-3.18.0, OrganismDbi-1.44.0, OUTRIDER-1.20.0, pathview-1.42.0, pcaMethods-1.94.0, perm-1.0-0.4, PFAM.db-3.18.0, phyloseq-1.46.0, plyranges-1.22.0, pmp-1.14.0, polyester-1.38.0, poweRlaw-0.70.6, preprocessCore-1.64.0, pRoloc-1.42.0, pRolocdata-1.40.0, pRolocGUI-2.12.0, ProtGenerics-1.34.0, PRROC-1.3.1, PSCBS-0.66.0, PureCN-2.8.1, qap-0.1-2, QDNAseq-1.38.0, QFeatures-1.12.0, qlcMatrix-0.9.7, qqconf-1.3.2, quantsmooth-1.68.0, qvalue-2.34.0, R.devices-2.17.1, R.filesets-2.15.0, R.huge-0.10.1, rainbow-3.8, randomcoloR-1.1.0.1, rARPACK-0.11-0, RBGL-1.78.0, RcisTarget-1.22.0, RcppAnnoy-0.0.22, RcppHNSW-0.5.0, RcppML-0.3.7, RcppZiggurat-0.1.6, reactome.db-1.86.2, ReactomePA-1.46.0, regioneR-1.34.0, reldist-1.7-2, remaCor-0.0.16, Repitools-1.48.0, ReportingTools-2.42.3, ResidualMatrix-1.12.0, restfulr-0.0.15, Rfast-2.1.0, RFOC-3.4-10, rGADEM-2.50.0, Rgraphviz-2.46.0, rhdf5-2.46.1, 
rhdf5filters-1.14.1, Rhdf5lib-1.24.1, Rhtslib-2.4.1, Ringo-1.66.0, RNASeqPower-1.42.0, RnBeads-2.20.0, RnBeads.hg19-1.34.0, RnBeads.hg38-1.34.0, RnBeads.mm10-2.10.0, RnBeads.mm9-1.34.0, RnBeads.rn5-1.34.0, ROC-1.78.0, rols-2.30.0, ROntoTools-2.30.0, ropls-1.34.0, RPMG-2.2-7, RProtoBufLib-2.14.0, Rsamtools-2.18.0, RSEIS-4.1-6, Rsubread-2.16.1, rsvd-1.0.5, rtracklayer-1.62.0, Rwave-2.6-5, S4Arrays-1.2.0, S4Vectors-0.40.2, samr-3.0, SamSPECTRAL-1.56.0, SC3-1.30.0, ScaledMatrix-1.10.0, SCANVIS-1.16.0, scater-1.30.1, scattermore-1.2, scDblFinder-1.16.0, scistreer-1.2.0, scran-1.30.2, scrime-1.3.5, scuttle-1.12.0, SeqArray-1.42.0, seqLogo-1.68.0, SeqVarTools-1.40.0, seriation-1.5.4, Seurat-5.0.1, SeuratObject-5.0.1, shinyBS-0.61.1, shinydashboardPlus-2.0.3, shinyFiles-0.9.3, shinyhelper-0.3.2, shinypanel-0.1.5, shinyWidgets-0.8.1, ShortRead-1.60.0, siggenes-1.76.0, Signac-1.12.0, simplifyEnrichment-1.12.0, SingleCellExperiment-1.24.0, SingleR-2.4.1, sitmo-2.0.2, slingshot-2.10.0, SMVar-1.3.4, SNPRelate-1.36.0, snpStats-1.52.0, SparseArray-1.2.3, sparseMatrixStats-1.14.0, sparsesvd-0.2-2, SpatialExperiment-1.12.0, Spectra-1.12.0, SPIA-2.54.0, splancs-2.01-44, SPOTlight-1.6.7, stageR-1.24.0, struct-1.14.0, structToolbox-1.14.0, SummarizedExperiment-1.32.0, susieR-0.12.35, sva-3.50.0, TailRank-3.2.2, TFBSTools-1.40.0, TFMPvalue-0.0.9, tkWidgets-1.80.0, TrajectoryUtils-1.10.0, treeio-1.26.0, TreeSummarizedExperiment-2.10.0, TSP-1.2-4, TxDb.Hsapiens.UCSC.hg19.knownGene-3.2.2, TxDb.Mmusculus.UCSC.mm10.knownGene-3.10.0, tximport-1.30.0, UCell-2.6.2, uwot-0.1.16, variancePartition-1.32.2, VariantAnnotation-1.48.1, venn-1.12, vsn-3.70.0, waiter-0.2.5, wateRmelon-2.8.0, WGCNA-1.72-5, widgetTools-1.80.0, Wrench-1.20.0, xcms-4.0.2, XVector-0.42.0, zCompositions-1.5.0-1, zellkonverter-1.12.1, zlibbioc-1.48.0

"},{"location":"available_software/detail/R-bundle-Bioconductor/#r-bundle-bioconductor316-foss-2022b-r-422","title":"R-bundle-Bioconductor/3.16-foss-2022b-R-4.2.2","text":"

This is a list of extensions included in the module:

affxparser-1.70.0, affy-1.76.0, affycoretools-1.70.0, affyio-1.68.0, AgiMicroRna-2.48.0, agricolae-1.3-5, ALDEx2-1.30.0, ALL-1.40.0, ANCOMBC-2.0.2, annaffy-1.70.0, annotate-1.76.0, AnnotationDbi-1.60.2, AnnotationFilter-1.22.0, AnnotationForge-1.40.1, AnnotationHub-3.6.0, anytime-0.3.9, aroma.affymetrix-3.2.1, aroma.apd-0.6.1, aroma.core-3.3.0, aroma.light-3.28.0, ash-1.0-15, ATACseqQC-1.22.0, AUCell-1.20.2, aws.s3-0.3.21, aws.signature-0.6.0, babelgene-22.9, ballgown-2.30.0, basilisk-1.10.2, basilisk.utils-1.10.0, batchelor-1.14.1, baySeq-2.31.0, beachmat-2.14.0, Biobase-2.58.0, BiocBaseUtils-1.0.0, BiocFileCache-2.6.1, BiocGenerics-0.44.0, BiocIO-1.8.0, BiocManager-1.30.20, BiocNeighbors-1.16.0, BiocParallel-1.32.5, BiocSingular-1.14.0, BiocStyle-2.26.0, BiocVersion-3.16.0, biomaRt-2.54.0, biomformat-1.26.0, Biostrings-2.66.0, biovizBase-1.46.0, blme-1.0-5, bluster-1.8.0, bookdown-0.33, BSgenome-1.66.3, BSgenome.Cfamiliaris.UCSC.canFam3-1.4.0, BSgenome.Hsapiens.UCSC.hg19-1.4.3, BSgenome.Hsapiens.UCSC.hg38-1.4.5, BSgenome.Mmusculus.UCSC.mm10-1.4.3, bsseq-1.34.0, bumphunter-1.40.0, ca-0.71.1, CAGEr-2.4.0, CAMERA-1.54.0, Category-2.64.0, ccdata-1.24.0, ccmap-1.24.0, CGHbase-1.58.0, CGHcall-2.60.0, ChIPpeakAnno-3.32.0, chromVAR-1.20.2, clusterProfiler-4.6.2, CNEr-1.34.0, coloc-5.1.0.1, colorRamps-2.3.1, ComplexHeatmap-2.14.0, ConsensusClusterPlus-1.62.0, conumee-1.32.0, crossmeta-1.24.0, cummeRbund-2.40.0, cytolib-2.10.1, CytoML-2.10.0, dada2-1.26.0, ddPCRclust-1.18.0, DECIPHER-2.26.0, DeconRNASeq-1.40.0, decontam-1.18.0, decoupleR-2.4.0, DEGseq-1.52.0, DelayedArray-0.24.0, DelayedMatrixStats-1.20.0, densEstBayes-1.0-2.1, derfinder-1.32.0, derfinderHelper-1.32.0, DESeq2-1.38.3, diffcyt-1.18.0, dir.expiry-1.6.0, DirichletMultinomial-1.40.0, DNABarcodes-1.28.0, DNAcopy-1.72.3, DO.db-2.9, docopt-0.7.1, DOSE-3.24.2, dqrng-0.3.0, DRIMSeq-1.26.0, DropletUtils-1.18.1, DSS-2.46.0, dupRadar-1.28.0, DynDoc-1.76.0, EBImage-4.40.0, edgeR-3.40.2, egg-0.4.5, emmeans-1.8.5, 
enrichplot-1.18.3, EnsDb.Hsapiens.v75-2.99.0, EnsDb.Hsapiens.v79-2.99.0, EnsDb.Hsapiens.v86-2.99.0, ensembldb-2.22.0, escape-1.8.0, estimability-1.4.1, ExperimentHub-2.6.0, extraDistr-1.9.1, factoextra-1.0.7, fda-6.0.5, FDb.InfiniumMethylation.hg19-2.2.0, fds-1.8, feature-1.2.15, fgsea-1.24.0, filelock-1.0.2, flowAI-1.28.0, flowClean-1.36.0, flowClust-3.36.0, flowCore-2.10.0, flowDensity-1.32.0, flowFP-1.56.3, flowMerge-2.46.0, flowPeaks-1.44.0, FlowSOM-2.6.0, FlowSorted.Blood.EPIC-2.2.0, FlowSorted.CordBloodCombined.450k-1.14.0, flowStats-4.10.0, flowViz-1.62.0, flowWorkspace-4.10.1, FRASER-1.10.2, fresh-0.2.0, gcrma-2.70.0, gdsfmt-1.34.0, genefilter-1.80.3, geneLenDataBase-1.34.0, geneplotter-1.76.0, GENESIS-2.28.0, GENIE3-1.20.0, GenomeInfoDb-1.34.9, GenomeInfoDbData-1.2.9, GenomicAlignments-1.34.1, GenomicFeatures-1.50.4, GenomicFiles-1.34.0, GenomicRanges-1.50.2, GenomicScores-2.10.0, GEOmap-2.5-0, GEOquery-2.66.0, ggbio-1.46.0, ggcyto-1.26.4, ggdendro-0.1.23, ggnewscale-0.4.8, ggpointdensity-0.1.0, ggrastr-1.0.1, ggseqlogo-0.1, ggthemes-4.2.4, ggtree-3.6.2, GLAD-2.62.0, Glimma-2.8.0, GlobalAncova-4.16.0, globaltest-5.52.0, GO.db-3.16.0, GOSemSim-2.24.0, goseq-1.50.0, GOstats-2.64.0, graph-1.76.0, graphite-1.44.0, GSEABase-1.60.0, gsmoothr-0.1.7, gson-0.1.0, GSVA-1.46.0, Gviz-1.42.1, GWASExactHW-1.01, GWASTools-1.44.0, HDF5Array-1.26.0, HDO.db-0.99.1, hdrcde-3.4, heatmaply-1.4.2, hgu133plus2.db-3.13.0, HiCBricks-1.16.0, HiCcompare-1.20.0, HMMcopy-1.40.0, Homo.sapiens-1.3.1, IHW-1.26.0, IlluminaHumanMethylation450kanno.ilmn12.hg19-0.6.1, IlluminaHumanMethylation450kmanifest-0.4.0, IlluminaHumanMethylationEPICanno.ilm10b2.hg19-0.6.0, IlluminaHumanMethylationEPICanno.ilm10b4.hg19-0.6.0, IlluminaHumanMethylationEPICmanifest-0.3.0, illuminaio-0.40.0, impute-1.72.3, InteractionSet-1.26.1, interactiveDisplayBase-1.36.0, intervals-0.15.4, IRanges-2.32.0, isva-1.9, JASPAR2020-0.99.10, KEGGgraph-1.58.3, KEGGREST-1.38.0, LEA-3.10.2, limma-3.54.2, log4r-0.4.3, 
lpsymphony-1.26.3, lsa-0.73.3, lumi-2.50.0, M3Drop-1.24.0, marray-1.76.0, maSigPro-1.70.0, MassSpecWavelet-1.64.1, MatrixGenerics-1.10.0, MBA-0.1-0, MEDIPS-1.50.0, metagenomeSeq-1.40.0, metaMA-3.1.3, metap-1.8, metapod-1.6.0, MethylSeekR-1.38.0, methylumi-2.44.0, Mfuzz-2.58.0, mia-1.6.0, minfi-1.44.0, missMethyl-1.32.0, mixOmics-6.22.0, mixsqp-0.3-48, MLInterfaces-1.78.0, MotifDb-1.40.0, motifmatchr-1.20.0, motifStack-1.42.0, MsCoreUtils-1.10.0, MsFeatures-1.6.0, msigdbr-7.5.1, MSnbase-2.24.2, MSstats-4.6.5, MSstatsConvert-1.8.3, MSstatsLiP-1.4.1, MSstatsPTM-2.0.3, MSstatsTMT-2.6.1, MultiAssayExperiment-1.24.0, MultiDataSet-1.26.0, multtest-2.54.0, muscat-1.12.1, mutoss-0.1-13, mzID-1.36.0, mzR-2.32.0, NADA-1.6-1.1, ncdfFlow-2.44.0, NMF-0.25, NOISeq-2.42.0, numbat-1.2.2, oligo-1.62.2, oligoClasses-1.60.0, ontologyIndex-2.10, oompaBase-3.2.9, oompaData-3.1.3, openCyto-2.10.1, org.Hs.eg.db-3.16.0, org.Mm.eg.db-3.16.0, org.Rn.eg.db-3.16.0, OrganismDbi-1.40.0, OUTRIDER-1.16.3, pathview-1.38.0, pcaMethods-1.90.0, perm-1.0-0.2, PFAM.db-3.16.0, phyloseq-1.42.0, pmp-1.10.0, polyester-1.34.0, poweRlaw-0.70.6, preprocessCore-1.60.2, pRoloc-1.38.2, pRolocdata-1.36.0, pRolocGUI-2.8.0, ProtGenerics-1.30.0, PRROC-1.3.1, PSCBS-0.66.0, PureCN-2.4.0, qap-0.1-2, QDNAseq-1.34.0, qlcMatrix-0.9.7, qqconf-1.3.1, quantsmooth-1.64.0, qvalue-2.30.0, R.devices-2.17.1, R.filesets-2.15.0, R.huge-0.9.0, rainbow-3.7, randomcoloR-1.1.0.1, rARPACK-0.11-0, RBGL-1.74.0, RcisTarget-1.18.2, RcppAnnoy-0.0.20, RcppHNSW-0.4.1, RcppML-0.3.7, RcppZiggurat-0.1.6, reactome.db-1.82.0, ReactomePA-1.42.0, regioneR-1.30.0, reldist-1.7-2, remaCor-0.0.11, Repitools-1.44.0, ReportingTools-2.38.0, ResidualMatrix-1.8.0, restfulr-0.0.15, Rfast-2.0.7, RFOC-3.4-6, rGADEM-2.46.0, Rgraphviz-2.42.0, rhdf5-2.42.0, rhdf5filters-1.10.0, Rhdf5lib-1.20.0, Rhtslib-2.0.0, Ringo-1.62.0, RNASeqPower-1.38.0, RnBeads-2.16.0, RnBeads.hg19-1.30.0, RnBeads.hg38-1.30.0, RnBeads.mm10-2.6.0, RnBeads.mm9-1.30.0, RnBeads.rn5-1.30.0, 
ROC-1.74.0, rols-2.26.0, ROntoTools-2.26.0, ropls-1.30.0, RPMG-2.2-3, RProtoBufLib-2.10.0, Rsamtools-2.14.0, RSEIS-4.1-4, Rsubread-2.12.3, rsvd-1.0.5, rtracklayer-1.58.0, Rwave-2.6-5, S4Vectors-0.36.2, samr-3.0, SamSPECTRAL-1.52.0, SC3-1.26.2, ScaledMatrix-1.6.0, SCANVIS-1.12.0, scater-1.26.1, scattermore-0.8, scDblFinder-1.12.0, scistreer-1.1.0, scran-1.26.2, scrime-1.3.5, scuttle-1.8.4, SeqArray-1.38.0, seqLogo-1.64.0, SeqVarTools-1.36.0, seriation-1.4.2, Seurat-4.3.0, SeuratObject-4.1.3, shinyBS-0.61.1, shinydashboardPlus-2.0.3, shinyFiles-0.9.3, shinyhelper-0.3.2, shinypanel-0.1.5, shinyWidgets-0.7.6, ShortRead-1.56.1, siggenes-1.72.0, Signac-1.9.0, simplifyEnrichment-1.8.0, SingleCellExperiment-1.20.0, SingleR-2.0.0, sitmo-2.0.2, slingshot-2.6.0, SMVar-1.3.4, SNPRelate-1.32.2, snpStats-1.48.0, sparseMatrixStats-1.10.0, sparsesvd-0.2-2, SpatialExperiment-1.8.1, SPIA-2.50.0, splancs-2.01-43, SPOTlight-1.2.0, stageR-1.20.0, struct-1.10.0, structToolbox-1.10.1, SummarizedExperiment-1.28.0, susieR-0.12.35, sva-3.46.0, TailRank-3.2.2, TFBSTools-1.36.0, TFMPvalue-0.0.9, tkWidgets-1.76.0, TrajectoryUtils-1.6.0, treeio-1.22.0, TreeSummarizedExperiment-2.6.0, TSP-1.2-3, TxDb.Hsapiens.UCSC.hg19.knownGene-3.2.2, TxDb.Mmusculus.UCSC.mm10.knownGene-3.10.0, tximport-1.26.1, UCell-2.2.0, uwot-0.1.14, variancePartition-1.28.7, VariantAnnotation-1.44.1, venn-1.11, vsn-3.66.0, waiter-0.2.5, wateRmelon-2.4.0, WGCNA-1.72-1, widgetTools-1.76.0, Wrench-1.16.0, xcms-3.20.0, XVector-0.38.0, zCompositions-1.4.0-1, zellkonverter-1.8.0, zlibbioc-1.44.0

"},{"location":"available_software/detail/R-bundle-CRAN/","title":"R-bundle-CRAN","text":"

Bundle of R packages from CRAN

https://www.r-project.org/

"},{"location":"available_software/detail/R-bundle-CRAN/#available-modules","title":"Available modules","text":"

The overview below shows which R-bundle-CRAN installations are available per target architecture in EESSI, ordered by software version (newest to oldest).

To start using R-bundle-CRAN, load one of these modules using a module load command like:

module load R-bundle-CRAN/2024.06-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)
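As a rough sketch, a session that loads this module and verifies one of its bundled packages could look like the following. This assumes an Lmod-based setup, as the EESSI environment provides; the `command -v` guard makes the script fall through gracefully on systems without the `module` command, and `ggplot2` is just one example extension picked from the listing for this module.

```shell
# Hedged sketch, assuming an Lmod-based environment (as on EESSI systems);
# the module name is taken from the availability table above.
MODULE="R-bundle-CRAN/2024.06-foss-2023b"

if command -v module >/dev/null 2>&1; then
    module avail R-bundle-CRAN      # list versions available for this architecture
    module load "$MODULE"
    Rscript -e 'library(ggplot2)'   # any bundled CRAN extension should now load
else
    echo "Lmod 'module' command not found; run this on an EESSI system"
fi
```

Loading the bundle module makes all listed extensions visible to the R installation it was built against, so no per-package `install.packages()` step is needed.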

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 R-bundle-CRAN/2024.06-foss-2023b x x x x x x x x x R-bundle-CRAN/2023.12-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/R-bundle-CRAN/#r-bundle-cran202406-foss-2023b","title":"R-bundle-CRAN/2024.06-foss-2023b","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.1, abe-3.0.1, abind-1.4-5, acepack-1.4.2, adabag-5.0, ade4-1.7-22, ADGofTest-0.3, admisc-0.35, aggregation-1.0.1, AICcmodavg-2.3-3, akima-0.6-3.4, alabama-2023.1.0, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-2.2.1, animation-2.7, aod-1.3.3, apcluster-1.4.13, ape-5.8, aplot-0.2.3, argparse-2.2.3, aricode-1.0.3, arm-1.14-4, arrayhelpers-1.1-0, asnipe-1.1.17, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-4, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-11, aws-2.5-5, awsMethods-1.1-1, backports-1.5.0, bacr-1.0.1, bartMachine-1.3.4.1, bartMachineJARs-1.2.1, base64-2.0.1, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-6, BayesPen-1.0, bayesplot-1.11.1, bayestestR-0.14.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25.1, BCEE-1.3.2, BDgraph-2.72, bdsmatrix-1.3-7, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.5, betareg-3.1-4, BH-1.84.0-0, BiasedUrn-2.0.12, bibtex-0.5.1, BIEN-1.2.6, bigD-0.2.0, BIGL-1.9.1, bigmemory-4.6.4, bigmemory.sri-0.1.8, bindr-0.1.1, bindrcpp-0.2.3, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-5-2, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.5-5, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.9.4, bold-1.3.0, boot-1.3-30, bootstrap-2019.6, Boruta-8.0.0, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brms-2.21.0, Brobdingnag-1.2-9, broom-1.0.6, broom.helpers-1.15.0, broom.mixed-0.2.9.5, bst-0.3-24, Cairo-1.6-2, calibrate-1.7.7, car-3.1-2, carData-3.0-5, caret-6.0-94, catlearn-1.0, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.3.0, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.3.1, chemometrics-1.4.4, chk-0.9.1, chkptstanr-0.1.1, chron-2.3-61, 
circlize-0.4.16, circular-0.5-0, class-7.3-22, classInt-0.4-10, cld2-1.2.4, clisymbols-1.2.0, clock-0.7.0, clue-0.3-65, cluster-2.1.6, clusterGeneration-1.3.8, clusterRepro-0.9, clustree-0.5.1, clValid-0.7, cmna-1.0.5, cmprsk-2.2-12, cNORM-3.0.4, cobalt-4.5.5, cobs-1.3-8, coda-0.19-4.1, codetools-0.2-20, coin-1.4-3, collapse-2.0.14, colorspace-2.1-0, colourpicker-1.3.0, combinat-0.0-8, ComICS-1.0.4, ComplexUpset-1.3.3, compositions-2.0-8, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, ConsRank-2.1.4, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-3, corpcor-1.6.10, corrplot-0.92, covr-3.6.4, CovSel-1.2.1, covsim-1.1.0, cowplot-1.1.3, coxed-0.3.3, coxme-2.2-20, crfsuite-0.4.2, crosstalk-1.2.1, crul-1.4.2, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.1.0, cubelyr-1.0.2, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-13, d3Network-0.5.2.1, dagitty-0.3-4, data.table-1.15.4, data.tree-1.1.0, DataCombine-0.2.21, datawizard-0.12.2, date-1.2-42, dbarts-0.9-28, DBI-1.2.3, dbplyr-2.5.0, dbscan-1.1-12, dcurver-0.9.2, ddalpha-1.3.15, deal-1.2-42, debugme-1.2.0, deldir-2.0-4, dendextend-1.17.1, DEoptim-2.2-8, DEoptimR-1.1-3, DepthProc-2.1.5, Deriv-4.1.3, DescTools-0.99.54, deSolve-1.40, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.11, DiceKriging-1.6.0, dichromat-2.0-0.1, dimRed-0.2.6, diptest-0.77-1, DiscriMiner-0.1-29, dismo-1.3-14, distillery-1.2-1, distr-2.9.3, distrEx-2.9.2, distributional-0.4.0, DistributionUtils-0.6-1, diveRsity-1.9.90, dlm-1.1-6, DMCfun-3.5.4, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.1-1, downloader-0.4, dplyr-1.1.4, dr-3.0.10, dreamerr-1.4.0, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.33, dtangle-2.0.9, dtplyr-1.3.1, DTRreg-2.2, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, e1071-1.7-14, earth-5.3.3, EasyABC-1.5.2, ECOSolveR-0.5.5, ellipse-0.5.0, elliptic-1.4-0, emdbook-1.3.13, emmeans-1.10.2, emoa-0.5-2, emulator-1.2-24, 
energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, EnvStats-2.8.1, epitools-0.5-10.1, ergm-4.6.0, ergm.count-4.1.2, ergm.multi-0.2.1, estimability-1.5.1, EValue-4.1.3, evd-2.3-7, Exact-3.2, expm-0.999-9, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-4, FactoMineR-2.11, FactorCopula-0.9.3, fail-1.3, farver-2.1.2, fastcluster-1.2.6, fastDummies-1.7.3, fasterize-1.0.5, fastICA-1.2-4, fastmatch-1.1-4, fdrtool-1.2.17, feather-0.3.5, ff-4.0.12, fftw-1.0-8, fftwtools-0.9-11, fields-15.2, filehash-2.4-5, finalfit-1.0.7, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-11, fixest-0.12.1, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-2, flexmix-2.3-19, flextable-0.9.6, fma-2.5, FME-1.3.6.3, fmri-1.9.12, FNN-1.1.4, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.23.0, foreign-0.8-86, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-12, fpp-0.5, fracdiff-1.5-3, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.33.2, future.apply-1.11.2, gam-1.22-3, gamlss-5.4-22, gamlss.data-6.0-6, gamlss.dist-6.1-1, gamlss.tr-5.1-9, gamm4-0.2-6, gap-1.5-3, gap.datasets-0.0.6, gapfill-0.9.6-1, gargle-1.5.2, gaussquad-1.0-3, gbm-2.1.9, gbRd-0.4.12, gclus-1.3.2, gdalUtils-2.0.3.2, gdata-3.0.0, gdistance-1.6.4, gdtools-0.3.7, gee-4.13-27, geeM-0.10.1, geepack-1.3.11, geex-1.1.1, geiger-2.0.11, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.14, geojsonsf-2.0.3, geometries-0.2.4, geometry-0.4.7, getopt-1.20.4, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.2.1, ggbeeswarm-0.7.2, ggdag-0.2.12, ggdist-3.3.2, ggExtra-0.10.1, ggfan-0.1.3, ggforce-0.4.2, ggformula-0.12.0, ggfun-0.1.5, ggh4x-0.2.8, ggnetwork-0.5.13, ggplot2-3.5.1, ggplotify-0.1.2, ggpubr-0.6.0, ggraph-2.2.1, ggrepel-0.9.5, ggridges-0.5.6, ggsci-3.2.0, ggsignif-0.6.4, ggstance-0.3.7, ggstats-0.6.0, ggvenn-0.1.10, ggvis-0.4.9, GillespieSSA-0.6.2, git2r-0.33.0, GJRM-0.2-6.5, glasso-1.11, gld-2.6.6, gllvm-1.4.3, 
glmmML-1.1.6, glmmTMB-1.1.9, glmnet-4.1-8, GlobalOptions-0.1.2, globals-0.16.3, gmm-1.8, gmodels-2.19.1, gmp-0.7-4, gnumeric-0.7-10, goftest-1.2-3, gomms-1.0, googledrive-2.1.1, googlesheets4-1.1.1, gower-1.0.1, GPArotation-2024.3-1, gplots-3.1.3.1, graphlayouts-1.1.1, grf-2.3.2, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.3-1, grpreg-3.4.0, GSA-1.03.3, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.10.1, gtable-0.3.5, gtools-3.9.5, gtsummary-1.7.2, GUTS-1.2.5, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.44.0.3, hal9001-0.4.6, haldensify-0.2.3, hardhat-1.4.0, harmony-1.2.0, hash-2.2.6.3, haven-2.5.4, hdf5r-1.3.10, hdm-0.3.2, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.3, HGNChelper-0.8.14, HiddenMarkov-1.8-13, Hmisc-5.1-3, hms-1.1.3, Hmsc-3.0-13, htmlTable-2.4.2, httpcode-0.3.0, huge-1.3.5, hunspell-3.0.3, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.21, idr-1.3, ids-1.0.1, ie2misc-0.9.1, igraph-2.0.3, image.binarization-0.1.3, imager-1.0.2, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.5, infotheo-1.2.0.1, inline-0.3.19, insight-0.20.3, intergraph-2.0-4, interp-1.1-6, interpretR-0.2.5, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-21, isoband-0.2.7, ISOcodes-2024.02.12, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-4, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-6, jpeg-0.1-10, jsonify-1.2.2, jstable-1.2.6, juicyjuice-0.1.0, kde1d-1.0.7, kedd-1.0.4, kernlab-0.9-32, KernSmooth-2.23-24, kinship2-1.9.6.1, klaR-1.7-3, KODAMA-2.4, kohonen-3.0.12, ks-1.14.2, labdsv-2.1-0, labeling-0.4.3, labelled-2.13.0, laeken-0.5.3, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, lattice-0.22-6, latticeExtra-0.6-30, lava-1.8.0, lavaan-0.6-18, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.5.2, ldbounds-2.0.2, leafem-0.2.3, leaflet-2.2.2, leaflet.providers-2.0.0, leafsync-0.1.0, leaps-3.2, LearnBayes-2.15.1, leiden-0.4.3.1, 
lhs-1.1.6, libcoin-1.0-10, limSolve-1.5.7.1, linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.1, lme4-1.1-35.4, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-3.0, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.10, logcondens-2.1.8, logger-0.3.0, logistf-1.26.0, logspline-2.1.22, longitudinal-1.1.13, longmemo-1.1-2, loo-2.7.0, lpSolve-5.6.20, lpSolveAPI-5.5.2.0-17.11, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.3, lwgeom-0.2-14, magic-1.6-1, magick-2.8.3, MALDIquant-1.22.2, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.2, maptools-1.1-8, markdown-1.13, MASS-7.3-61, Matching-4.10-14, MatchIt-4.5.5, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.7-0, matrixcalc-1.0-6, MatrixModels-0.5-3, matrixStats-1.3.0, maxLik-1.5-2.1, maxlike-0.1-11, maxnet-0.1.4, mboost-2.9-10, mclogit-0.9.6, mclust-6.1.1, mcmc-0.9-8, MCMCpack-1.7-0, mcmcse-1.5-0, mda-0.5-4, medflex-0.6-10, mediation-4.5.0, memisc-0.99.31.7, memuse-4.2-3, MESS-0.5.12, metadat-1.2-0, metafor-4.6-0, MetaUtility-2.1.2, mets-1.3.4, mgcv-1.9-1, mgsub-1.7.3, mhsmm-0.4.21, mi-1.1, mice-3.16.0, miceadds-3.17-44, microbenchmark-1.4.10, MIIVsem-0.5.8, minerva-1.5.10, minpack.lm-1.2-4, minqa-1.2.7, minty-0.0.1, mirt-1.41, misc3d-0.9-1, miscTools-0.6-28, missForest-1.5, missMDA-1.19, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-5, mlegp-3.1.9, MLmetrics-1.1.3, mlogit-1.1-1, mlr-2.19.2, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, modelr-0.1.11, modeltools-0.2-23, momentfit-0.5, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.4.0, mpath-0.4-2.25, mRMRe-2.1.2.1, msm-1.7.1, mstate-0.3.2, multcomp-1.4-25, multcompView-0.1-10, multicool-1.0.1, multipol-1.0-9, multitaper-1.0-17, munsell-0.5.1, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.2-5, nabor-0.5.0, naniar-1.1.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.22, NCmisc-1.2.0, network-1.18.2, networkDynamic-0.11.4, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-14, ngspatial-1.2-2, 
NISTunits-1.0.1, nleqslv-3.3.5, nlme-3.1-165, nloptr-2.1.0, NLP-0.2-1, nlsem-0.8-1, nnet-7.3-19, nnls-1.5, nonnest2-0.5-7, nor1mix-1.3-3, norm-1.0-11.1, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.8-2, OceanView-1.0.7, oddsratio-2.0.1, officer-0.6.6, openair-2.18-2, OpenMx-2.21.11, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2023-10.21, optmatch-0.10.7, optparse-1.7.5, ordinal-2023.12-4, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.3.3, outliers-0.15, packrat-0.9.2, pacman-0.5.1, pammtools-0.5.93, pamr-1.56.2, pan-1.9, parallelDist-0.2.6, parallelly-1.37.1, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-15, partykit-1.2-20, pastecs-1.4.2, patchwork-1.2.0, pbapply-1.7-2, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-4, pdp-0.8.1, PearsonDS-1.3.1, pec-2023.04.12, penalized-0.9-52, penfa-0.1.1, peperr-1.5, performance-0.12.2, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.12, phytools-2.3-0, pim-2.0.2, pinfsc50-1.3.0, pixmap-0.4-13, pkgmaker-0.32.10, PKI-0.1-14, plogr-0.2.0, plot3D-1.4.1, plot3Drgl-1.0.4, plotly-4.10.4, plotmo-3.6.3, plotrix-3.8-4, pls-2.8-3, plyr-1.8.9, PMA-1.2-3, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.25, Polychrome-1.5.1, polyclip-1.10-6, polycor-0.8-1, polynom-1.4-1, posterior-1.5.0, ppcor-1.1, prabclus-2.3-3, pracma-2.4.4, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, princurve-2.1.6, pROC-1.18.5, prodlim-2023.08.28, profileModel-0.6.1, proftools-0.99-3, progress-1.2.3, progressr-0.14.0, projpred-2.8.0, proto-1.0.0, proxy-0.4-27, proxyC-0.4.1, pryr-0.1.6, pscl-1.5.9, pspline-1.0-20, psych-2.4.3, Publish-2023.01.17, pulsar-0.3.11, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.8, qqman-0.1.9, qrnn-2.1.1, quadprog-1.5-8, quanteda-4.0.2, quantmod-0.4.26, quantreg-5.98, questionr-0.7.8, QuickJSR-1.2.2, R.cache-0.16.0, R.matlab-3.7.0, 
R.methodsS3-1.8.2, R.oo-1.26.0, R.rsp-0.46.0, R.utils-2.12.3, R2WinBUGS-2.1-22.1, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.3, randtoolbox-2.0.4, rangeModelMetadata-0.1.5, ranger-0.16.0, RANN-2.6.1, rapidjsonr-1.2.0, rARPACK-0.11-0, raster-3.6-26, rasterVis-0.51.6, ratelimitr-0.4.1, RBesT-1.7-3, rbibutils-2.2.16, rbison-1.0.0, Rborist-0.3-7, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, RColorBrewer-1.1-3, RcppArmadillo-0.12.8.4.0, RcppEigen-0.3.4.0.0, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.7, RcppTOML-0.2.2, RCurl-1.98-1.14, rda-1.2-1, Rdpack-2.6, rdrop2-0.8.2.1, reactable-0.4.4, reactR-0.5.0, readbitmap-0.1.5, reader-1.0.6, readODS-2.3.0, readr-2.1.5, readxl-1.4.3, rebird-1.3.0, recipes-1.0.10, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.5, relsurv-2.2-9, rematch-2.0.0, rentrez-1.2.3, renv-1.0.7, reprex-2.1.0, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.38.0, rex-1.2.1, rgbif-3.8.0, RGCCA-3.0.3, rgdal-1.6-7, rgeos-0.6-4, rgexf-0.16.2, rgl-1.3.1, Rglpk-0.6-5.1, rhandsontable-0.3.8, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.8, RInside-0.2.18, rio-1.1.1, riskRegression-2023.12.21, ritis-1.0.0, RItools-0.3-4, rJava-1.0-11, rjson-0.2.21, RJSONIO-1.3-1.9, rle-0.9.2, rlecuyer-0.3-8, rlemon-0.2.1, rlist-0.4.6.2, rmeta-3.0, Rmpfr-0.9-5, rms-6.8-1, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.7.0, robustbase-0.99-2, ROCR-1.0-11, ROI-1.0-1, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.4, roptim-0.1.6, rotl-3.1.0, rpact-4.0.0, rpart-4.1.23, rpf-1.0.14, RPMM-1.25, RPostgreSQL-0.7-6, rrcov-1.7-5, rredlist-0.7.1, rsample-1.2.1, rsconnect-1.3.1, Rserve-1.8-13, RSNNS-0.4-17, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.7, Rssa-1.0.5, rstan-2.32.6, rstantools-2.4.0, rstatix-0.7.2, rtdists-0.11-5, Rtsne-0.17, Rttf2pt1-1.3.12, RUnit-0.4.33, ruv-0.9.7.1, rvertnet-0.8.4, rvest-1.0.4, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, 
RWekajars-3.9.3-2, s2-1.1.6, sampling-2.10, sandwich-3.1-0, SBdecomp-1.2, scales-1.3.0, scam-1.2-17, scatterpie-0.2.3, scatterplot3d-0.3-44, scs-3.2.4, sctransform-0.4.1, SDMTools-1.1-221.2, seewave-2.2.3, segmented-2.1-0, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-36, servr-0.30, setRNG-2024.2-1, sf-1.0-16, sfheaders-0.4.4, sfsmisc-1.1-18, shadowtext-0.1.3, shape-1.4.6.1, shapefiles-0.7.2, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-1.8-0, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.1, sm-2.2-6.0, smoof-1.6.0.3, smoother-1.3, sn-2.1.1, sna-2.7-2, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.1, snowfall-1.84-6.3, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sp-2.1-4, spaa-0.2.2, spam-2.10-0, spaMM-4.5.0, SparseM-1.83, SPAtest-3.1.2, spatial-7.3-17, spatstat-3.0-8, spatstat.core-2.4-4, spatstat.data-3.1-2, spatstat.explore-3.2-7, spatstat.geom-3.2-9, spatstat.linnet-3.1-5, spatstat.model-3.2-11, spatstat.random-3.2-3, spatstat.sparse-3.1-0, spatstat.utils-3.0-5, spData-2.3.1, spdep-1.3-5, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.3, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.32.9, stargazer-5.2.3, stars-0.6-5, startupmsg-0.9.6.1, StatMatch-1.4.2, statmod-1.5.0, statnet-2019.6, statnet.common-4.9.0, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.12, stringmagic-1.1.2, strucchange-1.5-3, styler-1.10.3, subplex-1.8, SuperLearner-2.0-29, SuppDists-1.1-9.7, survey-4.4-2, survival-3.7-0, survivalROC-1.0.3.1, svd-0.5.5, svglite-2.1.3, svUnit-1.0.6, swagger-5.17.14, symmoments-1.2.1, tableone-0.13.2, tabletools-0.1.0, tau-0.0-25, taxize-0.9.100, tcltk2-1.2-11, tclust-2.0-4, TeachingDemos-2.13, tensor-1.5, tensorA-0.36.2.1, tergm-4.2.0, terra-1.7-78, testit-0.13, textcat-1.0-8, textplot-0.2.2, TFisher-0.2.0, TH.data-1.1-2, threejs-0.3.3, tictoc-1.2.1, 
tidybayes-3.0.6, tidygraph-1.3.1, tidyr-1.3.1, tidyselect-1.2.1, tidytext-0.4.2, tidytree-0.4.6, tidyverse-2.0.0, tiff-0.1-12, timechange-0.3.0, timeDate-4032.109, timereg-2.0.5, tkrplot-0.0-27, tm-0.7-13, tmap-3.3-4, tmaptools-3.1-1, TMB-1.9.12, tmle-2.0.1.1, tmvnsim-1.0-2, tmvtnorm-1.6, tokenizers-0.3.0, topicmodels-0.2-16, TraMineR-2.2-10, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-9, trust-0.1-8, tseries-0.10-56, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.4, tuneR-1.4.7, twang-2.6, tweedie-2.3.5, tweenr-2.0.3, tzdb-0.4.0, ucminf-1.2.1, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-1.0.0, uniqueAtomMat-0.1-3-2, units-0.8-5, unmarked-1.4.1, UpSetR-1.4.0, urca-1.3-4, urltools-1.7.3, uroot-2.1-3, uuid-1.2-0, V8-4.4.2, varhandle-2.0.6, vcd-1.4-12, vcfR-1.15.0, vegan-2.6-6.1, VennDiagram-1.7.3, VGAM-1.1-11, VIM-6.2.2, VineCopula-2.5.0, vioplot-0.4.0, vipor-0.4.7, viridis-0.6.5, viridisLite-0.4.2, visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.5, VSURF-1.2.0, warp-0.2.1, waveslim-1.8.5, wdm-0.2.4, webshot-0.5.5, webutils-1.2.0, weights-1.0.4, WeightSVM-1.7-13, wellknown-0.7.4, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.7.1, wikitaxa-0.4.0, wk-0.9.1, word2vec-0.4.0, wordcloud-2.6, worrms-0.4.3, writexl-1.5.0, WriteXLS-6.6.0, XBRL-0.99.19.1, xgboost-1.7.7.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.16.1, xts-0.14.0, yaImpute-1.0-34, yulab.utils-0.1.4, zeallot-0.1.0, zoo-1.8-12
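The listings above are plain comma-separated `name-version` pairs, where the version itself may contain hyphens (e.g. `Matrix-1.7-0`), so the split point is the first `-` that precedes a digit. A hypothetical shell helper (`ext_version` is not part of EESSI, just an illustration) to look up a bundled version could look like:

```shell
# Hypothetical helper: look up a package's version in a comma-separated
# "name-version" listing like the ones above. Versions may contain
# hyphens (e.g. Matrix-1.7-0), so we match name, then '-', then a digit.
ext_version() {
    listing="$1"; pkg="$2"
    echo "$listing" | tr ',' '\n' | sed -n "s/^ *${pkg}-\([0-9].*\)$/\1/p"
}

ext_version "ggplot2-3.5.1, dplyr-1.1.4, Matrix-1.7-0" "Matrix"   # -> 1.7-0
```

Note that the package name is interpolated into a sed pattern, so dots in names like `R.oo` act as regex wildcards; for a quick lookup against these listings that is harmless, but a robust tool would escape them.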

"},{"location":"available_software/detail/R-bundle-CRAN/#r-bundle-cran202312-foss-2023a","title":"R-bundle-CRAN/2023.12-foss-2023a","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.0, abe-3.0.1, abind-1.4-5, acepack-1.4.2, adabag-5.0, ade4-1.7-22, ADGofTest-0.3, admisc-0.34, aggregation-1.0.1, AICcmodavg-2.3-3, akima-0.6-3.4, alabama-2023.1.0, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-1.0.1, animation-2.7, aod-1.3.2, apcluster-1.4.11, ape-5.7-1, aplot-0.2.2, argparse-2.2.2, aricode-1.0.3, arm-1.13-1, arrayhelpers-1.1-0, asnipe-1.1.17, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-4, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-11, aws-2.5-3, awsMethods-1.1-1, backports-1.4.1, bacr-1.0.1, bartMachine-1.3.4.1, bartMachineJARs-1.2.1, base64-2.0.1, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-6, BayesPen-1.0, bayesplot-1.10.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25.1, BCEE-1.3.2, BDgraph-2.72, bdsmatrix-1.3-6, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.0, betareg-3.1-4, BH-1.81.0-1, BiasedUrn-2.0.11, bibtex-0.5.1, BIEN-1.2.6, bigD-0.2.0, BIGL-1.8.0, bigmemory-4.6.1, bigmemory.sri-0.1.6, bindr-0.1.1, bindrcpp-0.2.2, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-4, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.5-2, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.9.1, bold-1.3.0, boot-1.3-28.1, bootstrap-2019.6, Boruta-8.0.0, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brms-2.20.4, Brobdingnag-1.2-9, broom-1.0.5, broom.helpers-1.14.0, broom.mixed-0.2.9.4, bst-0.3-24, Cairo-1.6-2, calibrate-1.7.7, car-3.1-2, carData-3.0-5, caret-6.0-94, catlearn-1.0, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.3.0, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.3.1, chemometrics-1.4.4, chk-0.9.1, chkptstanr-0.1.1, chron-2.3-61, circlize-0.4.15, 
circular-0.5-0, class-7.3-22, classInt-0.4-10, cld2-1.2.4, clisymbols-1.2.0, clock-0.7.0, clue-0.3-65, cluster-2.1.6, clusterGeneration-1.3.8, clusterRepro-0.9, clustree-0.5.1, clValid-0.7, cmprsk-2.2-11, cNORM-3.0.4, cobalt-4.5.2, cobs-1.3-5, coda-0.19-4, codetools-0.2-19, coin-1.4-3, collapse-2.0.7, colorspace-2.1-0, colourpicker-1.3.0, combinat-0.0-8, ComICS-1.0.4, ComplexUpset-1.3.3, compositions-2.0-6, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, ConsRank-2.1.3, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-3, corpcor-1.6.10, corrplot-0.92, covr-3.6.4, CovSel-1.2.1, covsim-1.0.0, cowplot-1.1.1, coxed-0.3.3, coxme-2.2-18.1, crfsuite-0.4.2, crosstalk-1.2.1, crul-1.4.0, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.1.0, cubelyr-1.0.2, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-11, d3Network-0.5.2.1, dagitty-0.3-4, data.table-1.14.10, data.tree-1.1.0, DataCombine-0.2.21, date-1.2-42, dbarts-0.9-25, DBI-1.1.3, dbplyr-2.4.0, dbscan-1.1-12, dcurver-0.9.2, ddalpha-1.3.13, deal-1.2-42, debugme-1.1.0, deldir-2.0-2, dendextend-1.17.1, DEoptim-2.2-8, DEoptimR-1.1-3, DepthProc-2.1.5, Deriv-4.1.3, DescTools-0.99.52, deSolve-1.40, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.10, DiceKriging-1.6.0, dichromat-2.0-0.1, dimRed-0.2.6, diptest-0.77-0, DiscriMiner-0.1-29, dismo-1.3-14, distillery-1.2-1, distr-2.9.2, distrEx-2.9.0, distributional-0.3.2, DistributionUtils-0.6-1, diveRsity-1.9.90, dlm-1.1-6, DMCfun-2.0.2, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.1-1, downloader-0.4, dplyr-1.1.4, dr-3.0.10, dreamerr-1.4.0, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.31, dtangle-2.0.9, dtplyr-1.3.1, DTRreg-2.0, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, e1071-1.7-14, earth-5.3.2, EasyABC-1.5.2, ECOSolveR-0.5.5, ellipse-0.5.0, elliptic-1.4-0, emdbook-1.3.13, emmeans-1.8.9, emoa-0.5-0.2, emulator-1.2-21, energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, 
EnvStats-2.8.1, epitools-0.5-10.1, ergm-4.5.0, ergm.count-4.1.1, ergm.multi-0.2.0, estimability-1.4.1, EValue-4.1.3, evd-2.3-6.1, Exact-3.2, expm-0.999-8, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-3, FactoMineR-2.9, FactorCopula-0.9.3, fail-1.3, farver-2.1.1, fastcluster-1.2.3, fastDummies-1.7.3, fasterize-1.0.5, fastICA-1.2-4, fastmatch-1.1-4, fdrtool-1.2.17, feather-0.3.5, ff-4.0.9, fftw-1.0-7, fftwtools-0.9-11, fields-15.2, filehash-2.4-5, finalfit-1.0.7, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-11, fixest-0.11.2, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-1, flexmix-2.3-19, flextable-0.9.4, fma-2.5, FME-1.3.6.3, fmri-1.9.12, FNN-1.1.3.2, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.21.1, foreign-0.8-86, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-10, fpp-0.5, fracdiff-1.5-2, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.33.0, future.apply-1.11.0, gam-1.22-3, gamlss-5.4-20, gamlss.data-6.0-2, gamlss.dist-6.1-1, gamlss.tr-5.1-7, gamm4-0.2-6, gap-1.5-3, gap.datasets-0.0.6, gapfill-0.9.6-1, gargle-1.5.2, gaussquad-1.0-3, gbm-2.1.8.1, gbRd-0.4-11, gclus-1.3.2, gdalUtils-2.0.3.2, gdata-3.0.0, gdistance-1.6.4, gdtools-0.3.5, gee-4.13-26, geeM-0.10.1, geepack-1.3.9, geex-1.1.1, geiger-2.0.11, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.10.1, geojsonsf-2.0.3, geometries-0.2.3, geometry-0.4.7, getopt-1.20.4, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.2.0, ggbeeswarm-0.7.2, ggdag-0.2.10, ggdist-3.3.1, ggExtra-0.10.1, ggfan-0.1.3, ggforce-0.4.1, ggformula-0.12.0, ggfun-0.1.3, ggh4x-0.2.6, ggnetwork-0.5.12, ggplot2-3.4.4, ggplotify-0.1.2, ggpubr-0.6.0, ggraph-2.1.0, ggrepel-0.9.4, ggridges-0.5.4, ggsci-3.0.0, ggsignif-0.6.4, ggstance-0.3.6, ggstats-0.5.1, ggvenn-0.1.10, ggvis-0.4.8, GillespieSSA-0.6.2, git2r-0.33.0, GJRM-0.2-6.4, glasso-1.11, gld-2.6.6, gllvm-1.4.3, glmmML-1.1.6, glmmTMB-1.1.8, 
glmnet-4.1-8, GlobalOptions-0.1.2, globals-0.16.2, gmm-1.8, gmodels-2.18.1.1, gmp-0.7-3, gnumeric-0.7-10, goftest-1.2-3, gomms-1.0, googledrive-2.1.1, googlesheets4-1.1.1, gower-1.0.1, GPArotation-2023.11-1, gplots-3.1.3, graphlayouts-1.0.2, grf-2.3.1, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.3-1, grpreg-3.4.0, GSA-1.03.2, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.10.0, gtable-0.3.4, gtools-3.9.5, gtsummary-1.7.2, GUTS-1.2.5, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.42.0.2, hal9001-0.4.6, haldensify-0.2.3, hardhat-1.3.0, harmony-1.2.0, hash-2.2.6.3, haven-2.5.4, hdf5r-1.3.8, hdm-0.3.1, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.3, HGNChelper-0.8.1, HiddenMarkov-1.8-13, Hmisc-5.1-1, hms-1.1.3, Hmsc-3.0-13, htmlTable-2.4.2, httpcode-0.3.0, huge-1.3.5, hunspell-3.0.3, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.20, idr-1.3, ids-1.0.1, ie2misc-0.9.1, igraph-1.5.1, image.binarization-0.1.3, imager-0.45.2, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.5, infotheo-1.2.0.1, inline-0.3.19, intergraph-2.0-3, interp-1.1-5, interpretR-0.2.5, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-21, isoband-0.2.7, ISOcodes-2023.12.07, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-4, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-6, jpeg-0.1-10, jsonify-1.2.2, jstable-1.1.3, juicyjuice-0.1.0, kde1d-1.0.5, kedd-1.0.3, kernlab-0.9-32, KernSmooth-2.23-22, kinship2-1.9.6, klaR-1.7-2, KODAMA-2.4, kohonen-3.0.12, ks-1.14.1, labdsv-2.1-0, labeling-0.4.3, labelled-2.12.0, laeken-0.5.2, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, lattice-0.22-5, latticeExtra-0.6-30, lava-1.7.3, lavaan-0.6-16, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.4.2, ldbounds-2.0.2, leafem-0.2.3, leaflet-2.2.1, leaflet.providers-2.0.0, leafsync-0.1.0, leaps-3.1, LearnBayes-2.15.1, leiden-0.4.3.1, lhs-1.1.6, libcoin-1.0-10, limSolve-1.5.7, 
linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.0, lme4-1.1-35.1, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-3.0, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.8, logcondens-2.1.8, logger-0.2.2, logistf-1.26.0, logspline-2.1.21, longitudinal-1.1.13, longmemo-1.1-2, loo-2.6.0, lpSolve-5.6.19, lpSolveAPI-5.5.2.0-17.11, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.3, lwgeom-0.2-13, magic-1.6-1, magick-2.8.1, MALDIquant-1.22.1, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.1.1, maptools-1.1-8, markdown-1.12, MASS-7.3-60, Matching-4.10-14, MatchIt-4.5.5, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.6-4, matrixcalc-1.0-6, MatrixModels-0.5-3, matrixStats-1.1.0, maxLik-1.5-2, maxlike-0.1-10, maxnet-0.1.4, mboost-2.9-9, mclogit-0.9.6, mclust-6.0.1, mcmc-0.9-8, MCMCpack-1.6-3, mcmcse-1.5-0, mda-0.5-4, medflex-0.6-10, mediation-4.5.0, memisc-0.99.31.6, memuse-4.2-3, MESS-0.5.12, metadat-1.2-0, metafor-4.4-0, MetaUtility-2.1.2, mets-1.3.3, mgcv-1.9-0, mgsub-1.7.3, mhsmm-0.4.21, mi-1.1, mice-3.16.0, miceadds-3.16-18, microbenchmark-1.4.10, MIIVsem-0.5.8, minerva-1.5.10, minpack.lm-1.2-4, minqa-1.2.6, mirt-1.41, misc3d-0.9-1, miscTools-0.6-28, missForest-1.5, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-3.1, mlegp-3.1.9, MLmetrics-1.1.1, mlogit-1.1-1, mlr-2.19.1, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, modelr-0.1.11, modeltools-0.2-23, momentfit-0.5, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.4.0, mpath-0.4-2.23, mRMRe-2.1.2.1, msm-1.7.1, mstate-0.3.2, multcomp-1.4-25, multcompView-0.1-9, multicool-1.0.0, multipol-1.0-9, munsell-0.5.0, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.2-4, nabor-0.5.0, naniar-1.0.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.22, NCmisc-1.2.0, network-1.18.2, networkDynamic-0.11.3, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-14, ngspatial-1.2-2, NISTunits-1.0.1, nleqslv-3.3.5, nlme-3.1-164, nloptr-2.0.3, NLP-0.2-1, nlsem-0.8-1, nnet-7.3-19, 
nnls-1.5, nonnest2-0.5-6, nor1mix-1.3-2, norm-1.0-11.1, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.8-2, OceanView-1.0.6, oddsratio-2.0.1, officer-0.6.3, openair-2.18-0, OpenMx-2.21.11, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2023-10.21, optmatch-0.10.7, optparse-1.7.3, ordinal-2023.12-4, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.3.2, outliers-0.15, packrat-0.9.2, pacman-0.5.1, pammtools-0.5.92, pamr-1.56.1, pan-1.9, parallelDist-0.2.6, parallelly-1.36.0, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-14, partykit-1.2-20, pastecs-1.3.21, patchwork-1.1.3, pbapply-1.7-2, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-4, pdp-0.8.1, PearsonDS-1.3.0, pec-2023.04.12, penalized-0.9-52, penfa-0.1.1, peperr-1.5, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.10, phytools-2.0-3, pim-2.0.2, pinfsc50-1.3.0, pixmap-0.4-12, pkgmaker-0.32.10, plogr-0.2.0, plot3D-1.4, plot3Drgl-1.0.4, plotly-4.10.3, plotmo-3.6.2, plotrix-3.8-4, pls-2.8-3, plyr-1.8.9, PMA-1.2-2, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.24, Polychrome-1.5.1, polyclip-1.10-6, polycor-0.8-1, polynom-1.4-1, posterior-1.5.0, ppcor-1.1, prabclus-2.3-3, pracma-2.4.4, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, princurve-2.1.6, pROC-1.18.5, prodlim-2023.08.28, profileModel-0.6.1, proftools-0.99-3, progress-1.2.3, progressr-0.14.0, projpred-2.7.0, proto-1.0.0, proxy-0.4-27, proxyC-0.3.4, pryr-0.1.6, pscl-1.5.5.1, pspline-1.0-19, psych-2.3.9, Publish-2023.01.17, pulsar-0.3.11, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.8, qqman-0.1.9, qrnn-2.1, quadprog-1.5-8, quanteda-3.3.1, quantmod-0.4.25, quantreg-5.97, questionr-0.7.8, QuickJSR-1.0.8, R.cache-0.16.0, R.matlab-3.7.0, R.methodsS3-1.8.2, R.oo-1.25.0, R.rsp-0.45.0, R.utils-2.12.3, R2WinBUGS-2.1-21, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.3, 
randtoolbox-2.0.4, rangeModelMetadata-0.1.5, ranger-0.16.0, RANN-2.6.1, rapidjsonr-1.2.0, rARPACK-0.11-0, raster-3.6-26, rasterVis-0.51.6, ratelimitr-0.4.1, RBesT-1.7-2, rbibutils-2.2.16, rbison-1.0.0, Rborist-0.3-5, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, RColorBrewer-1.1-3, RcppArmadillo-0.12.6.6.1, RcppEigen-0.3.3.9.4, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.6, RcppTOML-0.2.2, RCurl-1.98-1.13, rda-1.2-1, Rdpack-2.6, rdrop2-0.8.2.1, reactable-0.4.4, reactR-0.5.0, readbitmap-0.1.5, reader-1.0.6, readODS-2.1.0, readr-2.1.4, readxl-1.4.3, rebird-1.3.0, recipes-1.0.8, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.5, relsurv-2.2-9, rematch-2.0.0, rentrez-1.2.3, renv-1.0.3, reprex-2.0.2, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.34.0, rex-1.2.1, rgbif-3.7.8, RGCCA-3.0.2, rgdal-1.6-7, rgeos-0.6-4, rgexf-0.16.2, rgl-1.2.8, Rglpk-0.6-5, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.7, RInside-0.2.18, rio-1.0.1, riskRegression-2023.09.08, ritis-1.0.0, RItools-0.3-3, rJava-1.0-10, rjson-0.2.21, RJSONIO-1.3-1.9, rle-0.9.2, rlecuyer-0.3-8, rlemon-0.2.1, rlist-0.4.6.2, rmeta-3.0, Rmpfr-0.9-4, rms-6.7-1, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.5.1, robustbase-0.99-1, ROCR-1.0-11, ROI-1.0-1, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.4, roptim-0.1.6, rotl-3.1.0, rpact-3.4.0, rpart-4.1.23, rpf-1.0.14, RPMM-1.25, RPostgreSQL-0.7-5, rrcov-1.7-4, rredlist-0.7.1, rsample-1.2.0, rsconnect-1.1.1, Rserve-1.8-13, RSNNS-0.4-17, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.4, Rssa-1.0.5, rstan-2.32.3, rstantools-2.3.1.1, rstatix-0.7.2, rtdists-0.11-5, Rtsne-0.17, Rttf2pt1-1.3.12, RUnit-0.4.32, ruv-0.9.7.1, rvertnet-0.8.2, rvest-1.0.3, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, RWekajars-3.9.3-2, s2-1.1.4, sampling-2.10, sandwich-3.0-2, SBdecomp-1.2, scales-1.3.0, scam-1.2-14, scatterpie-0.2.1, scatterplot3d-0.3-44, scs-3.2.4, sctransform-0.4.1, 
SDMTools-1.1-221.2, seewave-2.2.3, segmented-2.0-0, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-36, servr-0.27, setRNG-2022.4-1, sf-1.0-14, sfheaders-0.4.3, sfsmisc-1.1-16, shadowtext-0.1.2, shape-1.4.6, shapefiles-0.7.2, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-1.8-0, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.1, sm-2.2-5.7.1, smoof-1.6.0.3, smoother-1.1, sn-2.1.1, sna-2.7-2, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.1, snowfall-1.84-6.3, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sp-2.1-2, spaa-0.2.2, spam-2.10-0, spaMM-4.4.0, SparseM-1.81, SPAtest-3.1.2, spatial-7.3-17, spatstat-3.0-7, spatstat.core-2.4-4, spatstat.data-3.0-3, spatstat.explore-3.2-5, spatstat.geom-3.2-7, spatstat.linnet-3.1-3, spatstat.model-3.2-8, spatstat.random-3.2-2, spatstat.sparse-3.0-3, spatstat.utils-3.0-4, spData-2.3.0, spdep-1.3-1, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.2, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.26.28, stargazer-5.2.3, stars-0.6-4, startupmsg-0.9.6, StatMatch-1.4.1, statmod-1.5.0, statnet-2019.6, statnet.common-4.9.0, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.12, stringmagic-1.0.0, strucchange-1.5-3, styler-1.10.2, subplex-1.8, SuperLearner-2.0-28.1, SuppDists-1.1-9.7, survey-4.2-1, survival-3.5-7, survivalROC-1.0.3.1, svd-0.5.5, svglite-2.1.3, svUnit-1.0.6, swagger-3.33.1, symmoments-1.2.1, tableone-0.13.2, tabletools-0.1.0, tau-0.0-25, taxize-0.9.100, tcltk2-1.2-11, tclust-1.5-5, TeachingDemos-2.12, tensor-1.5, tensorA-0.36.2, tergm-4.2.0, terra-1.7-55, testit-0.13, textcat-1.0-8, textplot-0.2.2, TFisher-0.2.0, TH.data-1.1-2, threejs-0.3.3, tictoc-1.2, tidybayes-3.0.6, tidygraph-1.2.3, tidyr-1.3.0, tidyselect-1.2.0, tidytext-0.4.1, tidytree-0.4.5, tidyverse-2.0.0, tiff-0.1-12, timechange-0.2.0, timeDate-4022.108, timereg-2.0.5, 
tkrplot-0.0-27, tm-0.7-11, tmap-3.3-4, tmaptools-3.1-1, TMB-1.9.9, tmle-2.0.0, tmvnsim-1.0-2, tmvtnorm-1.6, tokenizers-0.3.0, topicmodels-0.2-15, TraMineR-2.2-8, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-9, trust-0.1-8, tseries-0.10-55, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.4, tuneR-1.4.6, twang-2.6, tweedie-2.3.5, tweenr-2.0.2, tzdb-0.4.0, ucminf-1.2.0, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-0.9.0, uniqueAtomMat-0.1-3-2, units-0.8-5, unmarked-1.3.2, UpSetR-1.4.0, urca-1.3-3, urltools-1.7.3, uroot-2.1-2, uuid-1.1-1, V8-4.4.1, varhandle-2.0.6, vcd-1.4-11, vcfR-1.15.0, vegan-2.6-4, VennDiagram-1.7.3, VGAM-1.1-9, VIM-6.2.2, VineCopula-2.5.0, vioplot-0.4.0, vipor-0.4.5, viridis-0.6.4, viridisLite-0.4.2, visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.5, VSURF-1.2.0, warp-0.2.1, waveslim-1.8.4, wdm-0.2.4, webshot-0.5.5, webutils-1.2.0, weights-1.0.4, WeightSVM-1.7-13, wellknown-0.7.4, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.5.0, wikitaxa-0.4.0, wk-0.9.1, word2vec-0.4.0, wordcloud-2.6, worrms-0.4.3, writexl-1.4.2, WriteXLS-6.4.0, xgboost-1.7.6.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.16, xts-0.13.1, yaImpute-1.0-33, yulab.utils-0.1.0, zeallot-0.1.0, zoo-1.8-12

"},{"location":"available_software/detail/R/","title":"R","text":"

R is a free software environment for statistical computing and graphics.

https://www.r-project.org/

"},{"location":"available_software/detail/R/#available-modules","title":"Available modules","text":"

The overview below shows which R installations are available per target architecture in EESSI, ordered by software version (newest to oldest).

To start using R, load one of these modules using a module load command like:

module load R/4.4.1-gfbf-2023b\n
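A minimal end-to-end session might look like the following sketch. It assumes a working CernVM-FS mount of the EESSI repository, and the init script path shown is for EESSI version 2023.06; adjust it to the EESSI version available on your system:

```shell
# Make the EESSI software stack available in this shell
# (path assumes EESSI version 2023.06)
source /cvmfs/software.eessi.io/versions/2023.06/init/bash

# Load the R module and verify which R version was picked up
module load R/4.4.1-gfbf-2023b
Rscript -e 'cat(R.version.string, "\n")'
```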

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

| | aarch64/generic | aarch64/neoverse_n1 | aarch64/neoverse_v1 | x86_64/generic | x86_64/amd/zen2 | x86_64/amd/zen3 | x86_64/amd/zen4 | x86_64/intel/haswell | x86_64/intel/skylake_avx512 |
|---|---|---|---|---|---|---|---|---|---|
| R/4.4.1-gfbf-2023b | x | x | x | x | x | x | x | x | x |
| R/4.3.2-gfbf-2023a | x | x | x | x | x | x | x | x | x |
| R/4.2.2-foss-2022b | x | x | x | x | x | x | - | x | x |

"},{"location":"available_software/detail/R/#r441-gfbf-2023b","title":"R/4.4.1-gfbf-2023b","text":"

This is a list of extensions included in the module:

askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-10, brio-1.1.5, bslib-0.7.0, cachem-1.1.0, callr-3.7.6, cli-3.6.3, clipr-0.8.0, commonmark-1.9.1, compiler, cpp11-0.4.7, crayon-1.5.3, credentials-2.0.1, curl-5.2.1, datasets, desc-1.4.3, devtools-2.4.5, diffobj-0.3.5, digest-0.6.36, downlit-0.4.4, ellipsis-0.3.2, evaluate-0.24.0, fansi-1.0.6, fastmap-1.2.0, fontawesome-0.5.2, fs-1.6.4, gert-2.0.1, gh-1.4.1, gitcreds-0.1.2, glue-1.7.0, graphics, grDevices, grid, highr-0.11, htmltools-0.5.8.1, htmlwidgets-1.6.4, httpuv-1.6.15, httr-1.4.7, httr2-1.0.1, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.8, knitr-1.47, later-1.3.2, lifecycle-1.0.4, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.2.0, parallel, pillar-1.9.0, pkgbuild-1.4.4, pkgconfig-2.0.3, pkgdown-2.0.9, pkgload-1.3.4, praise-1.0.0, prettyunits-1.2.0, processx-3.8.4, profvis-0.3.8, promises-1.3.0, ps-1.7.6, purrr-1.0.2, R6-2.5.1, ragg-1.3.2, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.12, rematch2-2.1.2, remotes-2.5.0, rlang-1.1.4, rmarkdown-2.27, roxygen2-7.3.1, rprojroot-2.0.4, rstudioapi-0.16.0, rversions-2.1.2, sass-0.4.9, sessioninfo-1.2.2, shiny-1.8.1.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.8.4, stringr-1.5.1, sys-3.4.2, systemfonts-1.1.0, tcltk, testthat-3.2.1.1, textshaping-0.4.0, tibble-3.2.1, tinytex-0.51, tools, urlchecker-1.0.1, usethis-2.2.3, utf8-1.2.4, utils, vctrs-0.6.5, waldo-0.5.2, whisker-0.4.1, withr-3.0.0, xfun-0.45, xml2-1.3.6, xopen-1.0.1, xtable-1.8-4, yaml-2.3.8, zip-2.3.1

"},{"location":"available_software/detail/R/#r432-gfbf-2023a","title":"R/4.3.2-gfbf-2023a","text":"

This is a list of extensions included in the module:

askpass-1.2.0, base, base64enc-0.1-3, brew-1.0-8, brio-1.1.3, bslib-0.5.1, cachem-1.0.8, callr-3.7.3, cli-3.6.1, clipr-0.8.0, commonmark-1.9.0, compiler, cpp11-0.4.6, crayon-1.5.2, credentials-2.0.1, curl-5.1.0, datasets, desc-1.4.2, devtools-2.4.5, diffobj-0.3.5, digest-0.6.33, downlit-0.4.3, ellipsis-0.3.2, evaluate-0.23, fansi-1.0.5, fastmap-1.1.1, fontawesome-0.5.2, fs-1.6.3, gert-2.0.0, gh-1.4.0, gitcreds-0.1.2, glue-1.6.2, graphics, grDevices, grid, highr-0.10, htmltools-0.5.7, htmlwidgets-1.6.2, httpuv-1.6.12, httr-1.4.7, httr2-0.2.3, ini-0.3.1, jquerylib-0.1.4, jsonlite-1.8.7, knitr-1.45, later-1.3.1, lifecycle-1.0.3, magrittr-2.0.3, memoise-2.0.1, methods, mime-0.12, miniUI-0.1.1.1, openssl-2.1.1, parallel, pillar-1.9.0, pkgbuild-1.4.2, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.3, praise-1.0.0, prettyunits-1.2.0, processx-3.8.2, profvis-0.3.8, promises-1.2.1, ps-1.7.5, purrr-1.0.2, R6-2.5.1, ragg-1.2.6, rappdirs-0.3.3, rcmdcheck-1.4.0, Rcpp-1.0.11, rematch2-2.1.2, remotes-2.4.2.1, rlang-1.1.2, rmarkdown-2.25, roxygen2-7.2.3, rprojroot-2.0.4, rstudioapi-0.15.0, rversions-2.1.2, sass-0.4.7, sessioninfo-1.2.2, shiny-1.7.5.1, sourcetools-0.1.7-1, splines, stats, stats4, stringi-1.7.12, stringr-1.5.0, sys-3.4.2, systemfonts-1.0.5, tcltk, testthat-3.2.0, textshaping-0.3.7, tibble-3.2.1, tinytex-0.48, tools, urlchecker-1.0.1, usethis-2.2.2, utf8-1.2.4, utils, vctrs-0.6.4, waldo-0.5.2, whisker-0.4.1, withr-2.5.2, xfun-0.41, xml2-1.3.5, xopen-1.0.0, xtable-1.8-4, yaml-2.3.7, zip-2.3.0

"},{"location":"available_software/detail/R/#r422-foss-2022b","title":"R/4.2.2-foss-2022b","text":"

This is a list of extensions included in the module:

abc-2.2.1, abc.data-1.0, abe-3.0.1, abind-1.4-5, acepack-1.4.1, adabag-4.2, ade4-1.7-22, ADGofTest-0.3, admisc-0.31, aggregation-1.0.1, AICcmodavg-2.3-1, akima-0.6-3.4, alabama-2022.4-1, AlgDesign-1.2.1, alluvial-0.1-2, AMAPVox-1.0.0, animation-2.7, aod-1.3.2, apcluster-1.4.10, ape-5.7-1, aplot-0.1.10, argparse-2.2.2, aricode-1.0.2, arm-1.13-1, askpass-1.1, asnipe-1.1.16, assertive-0.3-6, assertive.base-0.0-9, assertive.code-0.0-3, assertive.data-0.0-3, assertive.data.uk-0.0-2, assertive.data.us-0.0-2, assertive.datetimes-0.0-3, assertive.files-0.0-2, assertive.matrices-0.0-2, assertive.models-0.0-2, assertive.numbers-0.0-2, assertive.properties-0.0-5, assertive.reflection-0.0-5, assertive.sets-0.0-3, assertive.strings-0.0-3, assertive.types-0.0-3, assertthat-0.2.1, AUC-0.3.2, audio-0.1-10, aws-2.5-1, awsMethods-1.1-1, backports-1.4.1, bacr-1.0.1, bartMachine-1.3.3.1, bartMachineJARs-1.2.1, base, base64-2.0.1, base64enc-0.1-3, BatchJobs-1.9, batchmeans-1.0-4, BayesianTools-0.1.8, BayesLogit-2.1, bayesm-3.1-5, BayesPen-1.0, bayesplot-1.10.0, BB-2019.10-1, BBmisc-1.13, bbmle-1.0.25, BCEE-1.3.1, BDgraph-2.72, bdsmatrix-1.3-6, beanplot-1.3.1, beeswarm-0.4.0, berryFunctions-1.22.0, betareg-3.1-4, BH-1.81.0-1, BiasedUrn-2.0.9, bibtex-0.5.1, bigD-0.2.0, BIGL-1.7.0, bigmemory-4.6.1, bigmemory.sri-0.1.6, bindr-0.1.1, bindrcpp-0.2.2, bio3d-2.4-4, biom-0.3.12, biomod2-4.2-2, bit-4.0.5, bit64-4.0.5, bitops-1.0-7, blavaan-0.4-7, blob-1.2.4, BMA-3.18.17, bmp-0.3, bnlearn-4.8.1, bold-1.2.0, boot-1.3-28.1, bootstrap-2019.6, Boruta-8.0.0, brew-1.0-8, brglm-0.7.2, bridgedist-0.1.2, bridgesampling-1.1-2, brio-1.1.3, brms-2.19.0, Brobdingnag-1.2-9, broom-1.0.4, broom.helpers-1.12.0, broom.mixed-0.2.9.4, bslib-0.4.2, bst-0.3-24, cachem-1.0.7, Cairo-1.6-0, calibrate-1.7.7, callr-3.7.3, car-3.1-1, carData-3.0-5, caret-6.0-93, catlearn-0.9.1, caTools-1.18.2, CBPS-0.23, celestial-1.4.6, cellranger-1.1.0, cgdsr-1.2.10, cghFLasso-0.2-1, changepoint-2.2.4, checkmate-2.1.0, chemometrics-1.4.2, 
chkptstanr-0.1.1, chron-2.3-60, circlize-0.4.15, circular-0.4-95, class-7.3-21, classInt-0.4-9, cld2-1.2.4, cli-3.6.0, clipr-0.8.0, clisymbols-1.2.0, clock-0.6.1, clue-0.3-64, cluster-2.1.4, clusterGeneration-1.3.7, clusterRepro-0.9, clustree-0.5.0, clValid-0.7, cmprsk-2.2-11, cNORM-3.0.2, cobalt-4.4.1, cobs-1.3-5, coda-0.19-4, codetools-0.2-19, coin-1.4-2, collapse-1.9.3, colorspace-2.1-0, colourpicker-1.2.0, combinat-0.0-8, ComICS-1.0.4, commonmark-1.8.1, compiler, ComplexUpset-1.3.3, compositions-2.0-5, CompQuadForm-1.4.3, conditionz-0.1.0, conflicted-1.2.0, conquer-1.3.3, contfrac-1.1-12, copCAR-2.0-4, copula-1.1-2, corpcor-1.6.10, corrplot-0.92, covr-3.6.1, CovSel-1.2.1, covsim-1.0.0, cowplot-1.1.1, coxed-0.3.3, coxme-2.2-18.1, cpp11-0.4.3, crayon-1.5.2, credentials-1.3.2, crfsuite-0.4.1, crosstalk-1.2.0, crul-1.3, cSEM-0.5.0, csSAM-1.2.4, ctmle-0.1.2, cubature-2.0.4.6, cubelyr-1.0.2, curl-5.0.0, cvAUC-1.1.4, CVST-0.2-3, CVXR-1.0-11, d3Network-0.5.2.1, dagitty-0.3-1, data.table-1.14.8, data.tree-1.0.0, DataCombine-0.2.21, datasets, date-1.2-42, dbarts-0.9-23, DBI-1.1.3, dbplyr-2.3.1, dbscan-1.1-11, dcurver-0.9.2, ddalpha-1.3.13, deal-1.2-42, debugme-1.1.0, deldir-1.0-6, dendextend-1.16.0, DEoptim-2.2-8, DEoptimR-1.0-11, DepthProc-2.1.5, Deriv-4.1.3, desc-1.4.2, DescTools-0.99.48, deSolve-1.35, devtools-2.4.5, dfidx-0.0-5, DHARMa-0.4.6, dHSIC-2.1, diagram-1.6.5, DiagrammeR-1.0.9, DiceKriging-1.6.0, dichromat-2.0-0.1, diffobj-0.3.5, digest-0.6.31, dimRed-0.2.6, diptest-0.76-0, DiscriMiner-0.1-29, dismo-1.3-9, distillery-1.2-1, distr-2.9.1, distrEx-2.9.0, distributional-0.3.1, DistributionUtils-0.6-0, diveRsity-1.9.90, dlm-1.1-6, DMCfun-2.0.2, doc2vec-0.2.0, docstring-1.0.0, doMC-1.3.8, doParallel-1.0.17, doRNG-1.8.6, doSNOW-1.0.20, dotCall64-1.0-2, downlit-0.4.2, downloader-0.4, dplyr-1.1.0, dr-3.0.10, drgee-1.1.10, DRR-0.0.4, drugCombo-1.2.1, DT-0.27, dtangle-2.0.9, dtplyr-1.3.0, DTRreg-1.7, dtw-1.23-1, dummies-1.5.6, dygraphs-1.1.1.6, dynamicTreeCut-1.63-1, 
e1071-1.7-13, earth-5.3.2, EasyABC-1.5.2, ECOSolveR-0.5.5, elementR-1.3.7, ellipse-0.4.3, ellipsis-0.3.2, elliptic-1.4-0, emdbook-1.3.12, emmeans-1.8.5, emoa-0.5-0.1, emulator-1.2-21, energy-1.7-11, ENMeval-2.0.4, entropy-1.3.1, EnvStats-2.7.0, epitools-0.5-10.1, ergm-4.4.0, ergm.count-4.1.1, estimability-1.4.1, evaluate-0.20, EValue-4.1.3, evd-2.3-6.1, Exact-3.2, expm-0.999-7, ExPosition-2.8.23, expsmooth-2.3, extrafont-0.19, extrafontdb-1.0, extRemes-2.1-3, FactoMineR-2.7, FactorCopula-0.9.3, fail-1.3, fansi-1.0.4, farver-2.1.1, fastcluster-1.2.3, fastDummies-1.6.3, fasterize-1.0.4, fastICA-1.2-3, fastmap-1.1.1, fastmatch-1.1-3, fdrtool-1.2.17, feather-0.3.5, ff-4.0.9, fftw-1.0-7, fftwtools-0.9-11, fields-14.1, filehash-2.4-5, finalfit-1.0.6, findpython-1.0.8, fishMod-0.29, fitdistrplus-1.1-8, FKSUM-1.0.1, flashClust-1.01-2, flexclust-1.4-1, flexmix-2.3-19, flextable-0.9.2, fma-2.5, FME-1.3.6.2, fmri-1.9.11, FNN-1.1.3.1, fontawesome-0.5.0, fontBitstreamVera-0.1.1, fontLiberation-0.1.0, fontquiver-0.2.1, forcats-1.0.0, foreach-1.5.2, forecast-8.21, foreign-0.8-84, formatR-1.14, Formula-1.2-5, formula.tools-1.7.1, fossil-0.4.0, fpc-2.2-10, fpp-0.5, fracdiff-1.5-2, fs-1.6.1, furrr-0.3.1, futile.logger-1.4.3, futile.options-1.0.1, future-1.32.0, future.apply-1.10.0, gam-1.22-1, gamlss-5.4-12, gamlss.data-6.0-2, gamlss.dist-6.0-5, gamlss.tr-5.1-7, gamm4-0.2-6, gap-1.5-1, gap.datasets-0.0.5, gapfill-0.9.6-1, gargle-1.3.0, gaussquad-1.0-3, gbm-2.1.8.1, gbRd-0.4-11, gclus-1.3.2, gdalUtilities-1.2.5, gdalUtils-2.0.3.2, gdata-2.18.0.1, gdistance-1.6, gdtools-0.3.3, gee-4.13-25, geeM-0.10.1, geepack-1.3.9, geex-1.1.1, geiger-2.0.10, GeneNet-1.2.16, generics-0.1.3, genoPlotR-0.8.11, GenSA-1.1.8, geojson-0.3.5, geojsonio-0.11.3, geojsonsf-2.0.3, geometries-0.2.2, geometry-0.4.7, gert-1.9.2, getopt-1.20.3, GetoptLong-1.0.5, gfonts-0.2.0, GGally-2.1.2, ggbeeswarm-0.7.1, ggdag-0.2.7, ggExtra-0.10.0, ggfan-0.1.3, ggforce-0.4.1, ggformula-0.10.2, ggfun-0.0.9, ggh4x-0.2.3, 
ggnetwork-0.5.12, ggplot2-3.4.1, ggplotify-0.1.0, ggpubr-0.6.0, ggraph-2.1.0, ggrepel-0.9.3, ggridges-0.5.4, ggsci-3.0.0, ggsignif-0.6.4, ggstance-0.3.6, ggvenn-0.1.9, ggvis-0.4.8, gh-1.4.0, GillespieSSA-0.6.2, git2r-0.31.0, gitcreds-0.1.2, GJRM-0.2-6.1, glasso-1.11, gld-2.6.6, gllvm-1.4.1, glmmML-1.1.4, glmmTMB-1.1.5, glmnet-4.1-6, GlobalOptions-0.1.2, globals-0.16.2, glue-1.6.2, gmm-1.7, gmodels-2.18.1.1, gmp-0.7-1, gnumeric-0.7-8, goftest-1.2-3, gomms-1.0, googledrive-2.0.0, googlesheets4-1.0.1, gower-1.0.1, GPArotation-2022.10-2, gplots-3.1.3, graphics, graphlayouts-0.8.4, grDevices, grf-2.2.1, grid, gridBase-0.4-7, gridExtra-2.3, gridGraphics-0.5-1, grImport2-0.2-0, grpreg-3.4.0, GSA-1.03.2, gsalib-2.2.1, gsl-2.1-8, gsw-1.1-1, gt-0.8.0, gtable-0.3.1, gtools-3.9.4, gtsummary-1.7.0, GUTS-1.2.3, gWidgets2-1.0-9, gWidgets2tcltk-1.0-8, GxEScanR-2.0.2, h2o-3.40.0.1, hal9001-0.4.3, haldensify-0.2.3, hardhat-1.2.0, harmony-0.1.1, hash-2.2.6.2, haven-2.5.2, hdf5r-1.3.8, hdm-0.3.1, heatmap3-1.1.9, here-1.0.1, hexbin-1.28.2, HGNChelper-0.8.1, HiddenMarkov-1.8-13, highr-0.10, Hmisc-5.0-1, hms-1.1.2, Hmsc-3.0-13, htmlTable-2.4.1, htmltools-0.5.4, htmlwidgets-1.6.1, httpcode-0.3.0, httpuv-1.6.9, httr-1.4.5, httr2-0.2.2, huge-1.3.5, hunspell-3.0.2, hwriter-1.3.2.1, HWxtest-1.1.9, hypergeo-1.2-13, ica-1.0-3, IDPmisc-1.1.20, idr-1.3, ids-1.0.1, ie2misc-0.9.0, igraph-1.4.1, image.binarization-0.1.3, imager-0.42.18, imagerExtra-1.3.2, ineq-0.2-13, influenceR-0.1.0.1, infotheo-1.2.0.1, ini-0.3.1, inline-0.3.19, intergraph-2.0-2, interp-1.1-3, interpretR-0.2.4, intrinsicDimension-1.2.0, inum-1.0-5, ipred-0.9-14, irace-3.5, irlba-2.3.5.1, ismev-1.42, Iso-0.0-18.1, isoband-0.2.7, ISOcodes-2022.09.29, ISOweek-0.6-2, iterators-1.0.14, itertools-0.1-3, JADE-2.0-3, janeaustenr-1.0.0, JBTools-0.7.2.9, jiebaR-0.11, jiebaRD-0.1, jomo-2.7-4, jpeg-0.1-10, jqr-1.3.1, jquerylib-0.1.4, jsonify-1.2.2, jsonlite-1.8.4, jstable-1.0.7, juicyjuice-0.1.0, kde1d-1.0.5, kedd-1.0.3, kernlab-0.9-32, 
KernSmooth-2.23-20, kinship2-1.9.6, klaR-1.7-1, knitr-1.42, KODAMA-2.4, kohonen-3.0.11, ks-1.14.0, labdsv-2.0-1, labeling-0.4.2, labelled-2.10.0, laeken-0.5.2, lambda.r-1.2.4, LaplacesDemon-16.1.6, lars-1.3, lassosum-0.4.5, later-1.3.0, lattice-0.20-45, latticeExtra-0.6-30, lava-1.7.2.1, lavaan-0.6-15, lazy-1.2-18, lazyeval-0.2.2, LCFdata-2.0, lda-1.4.2, ldbounds-2.0.0, leafem-0.2.0, leaflet-2.1.2, leaflet.providers-1.9.0, leafsync-0.1.0, leaps-3.1, LearnBayes-2.15.1, leiden-0.4.3, lhs-1.1.6, libcoin-1.0-9, lifecycle-1.0.3, limSolve-1.5.6, linkcomm-1.0-14, linprog-0.9-4, liquidSVM-1.2.4, listenv-0.9.0, lme4-1.1-32, LMERConvenienceFunctions-3.0, lmerTest-3.1-3, lmom-2.9, Lmoments-1.3-1, lmtest-0.9-40, lobstr-1.1.2, locfdr-1.1-8, locfit-1.5-9.7, logcondens-2.1.7, logger-0.2.2, logistf-1.24.1, logspline-2.1.19, longitudinal-1.1.13, longmemo-1.1-2, loo-2.5.1, lpSolve-5.6.18, lpSolveAPI-5.5.2.0-17.9, lqa-1.0-3, lsei-1.3-0, lslx-0.6.11, lubridate-1.9.2, lwgeom-0.2-11, magic-1.6-1, magick-2.7.4, magrittr-2.0.3, MALDIquant-1.22, manipulateWidget-0.11.1, mapproj-1.2.11, maps-3.4.1, maptools-1.1-6, markdown-1.5, MASS-7.3-58.3, Matching-4.10-8, MatchIt-4.5.1, mathjaxr-1.6-0, matlab-1.0.4, Matrix-1.5-3, matrixcalc-1.0-6, MatrixModels-0.5-1, matrixStats-0.63.0, maxLik-1.5-2, maxlike-0.1-9, maxnet-0.1.4, mboost-2.9-7, mclogit-0.9.6, mclust-6.0.0, mcmc-0.9-7, MCMCpack-1.6-3, mcmcse-1.5-0, mda-0.5-3, medflex-0.6-7, mediation-4.5.0, memisc-0.99.31.6, memoise-2.0.1, memuse-4.2-3, MESS-0.5.9, metadat-1.2-0, metafor-3.8-1, MetaUtility-2.1.2, methods, mets-1.3.2, mgcv-1.8-42, mgsub-1.7.3, mhsmm-0.4.16, mi-1.1, mice-3.15.0, miceadds-3.16-18, microbenchmark-1.4.9, MIIVsem-0.5.8, mime-0.12, minerva-1.5.10, miniUI-0.1.1.1, minpack.lm-1.2-3, minqa-1.2.5, mirt-1.38.1, misc3d-0.9-1, miscTools-0.6-26, missForest-1.5, mitml-0.4-5, mitools-2.4, mixtools-2.0.0, mlbench-2.1-3, mlegp-3.1.9, MLmetrics-1.1.1, mlogit-1.1-1, mlr-2.19.1, mlrMBO-1.1.5.1, mltools-0.3.5, mnormt-2.1.1, ModelMetrics-1.2.2.2, 
modelr-0.1.10, modeltools-0.2-23, MODIStsp-2.1.0, momentfit-0.3, moments-0.14.1, MonteCarlo-1.0.6, mosaicCore-0.9.2.1, mpath-0.4-2.23, mRMRe-2.1.2, msm-1.7, mstate-0.3.2, multcomp-1.4-23, multcompView-0.1-8, multicool-0.1-12, multipol-1.0-7, munsell-0.5.0, mvabund-4.2.1, mvnfast-0.2.8, mvtnorm-1.1-3, nabor-0.5.0, naniar-1.0.0, natserv-1.0.0, naturalsort-0.1.3, ncbit-2013.03.29.1, ncdf4-1.21, NCmisc-1.2.0, network-1.18.1, networkDynamic-0.11.3, networkLite-1.0.5, neuralnet-1.44.2, neuRosim-0.2-13, ngspatial-1.2-2, NISTunits-1.0.1, nleqslv-3.3.4, nlme-3.1-162, nloptr-2.0.3, NLP-0.2-1, nlsem-0.8, nnet-7.3-18, nnls-1.4, nonnest2-0.5-5, nor1mix-1.3-0, norm-1.0-10.0, nortest-1.0-4, np-0.60-17, npsurv-0.5-0, numDeriv-2016.8-1.1, oai-0.4.0, oce-1.7-10, OceanView-1.0.6, oddsratio-2.0.1, officer-0.6.2, openair-2.16-0, OpenMx-2.21.1, openssl-2.0.6, openxlsx-4.2.5.2, operator.tools-1.6.3, optextras-2019-12.4, optimParallel-1.0-2, optimr-2019-12.16, optimx-2022-4.30, optmatch-0.10.6, optparse-1.7.3, ordinal-2022.11-16, origami-1.0.7, oro.nifti-0.11.4, orthopolynom-1.0-6.1, osqp-0.6.0.8, outliers-0.15, packrat-0.9.1, pacman-0.5.1, pammtools-0.5.8, pamr-1.56.1, pan-1.6, parallel, parallelDist-0.2.6, parallelly-1.34.0, parallelMap-1.5.1, ParamHelpers-1.14.1, parsedate-1.3.1, party-1.3-13, partykit-1.2-18, pastecs-1.3.21, patchwork-1.1.2, pbapply-1.7-0, pbivnorm-0.6.0, pbkrtest-0.5.2, PCAmatchR-0.3.3, pcaPP-2.0-3, pdp-0.8.1, PearsonDS-1.2.3, pec-2022.05.04, penalized-0.9-52, penfa-0.1.1, peperr-1.4, PermAlgo-1.2, permute-0.9-7, phangorn-2.11.1, pheatmap-1.0.12, phylobase-0.8.10, phytools-1.5-1, pillar-1.8.1, pim-2.0.2, pinfsc50-1.2.0, pixmap-0.4-12, pkgbuild-1.4.0, pkgconfig-2.0.3, pkgdown-2.0.7, pkgload-1.3.2, pkgmaker-0.32.8, plogr-0.2.0, plot3D-1.4, plot3Drgl-1.0.4, plotly-4.10.1, plotmo-3.6.2, plotrix-3.8-2, pls-2.8-1, plyr-1.8.8, PMA-1.2.1, png-0.1-8, PoissonSeq-1.1.2, poLCA-1.6.0.1, polspline-1.1.22, Polychrome-1.5.1, polyclip-1.10-4, polycor-0.8-1, polynom-1.4-1, 
posterior-1.4.1, ppcor-1.1, prabclus-2.3-2, pracma-2.4.2, praise-1.0.0, PresenceAbsence-1.1.11, preseqR-4.0.0, prettyGraphs-2.1.6, prettyunits-1.1.1, princurve-2.1.6, pROC-1.18.0, processx-3.8.0, prodlim-2019.11.13, profileModel-0.6.1, proftools-0.99-3, profvis-0.3.7, progress-1.2.2, progressr-0.13.0, projpred-2.4.0, promises-1.2.0.1, proto-1.0.0, protolite-2.3.0, proxy-0.4-27, proxyC-0.3.3, pryr-0.1.6, ps-1.7.2, pscl-1.5.5, pspline-1.0-19, psych-2.2.9, Publish-2023.01.17, pulsar-0.3.10, purrr-1.0.1, pvclust-2.2-0, qgam-1.3.4, qgraph-1.9.3, qqman-0.1.8, qrnn-2.0.5, quadprog-1.5-8, quanteda-3.3.0, quantmod-0.4.20, quantreg-5.94, questionr-0.7.8, R.cache-0.16.0, R.matlab-3.7.0, R.methodsS3-1.8.2, R.oo-1.25.0, R.rsp-0.45.0, R.utils-2.12.2, R2WinBUGS-2.1-21, R6-2.5.1, ragg-1.2.5, random-0.2.6, randomForest-4.7-1.1, randomForestSRC-3.2.1, randtoolbox-2.0.4, rangeModelMetadata-0.1.4, ranger-0.14.1, RANN-2.6.1, rapidjsonr-1.2.0, rappdirs-0.3.3, rARPACK-0.11-0, raster-3.6-20, rasterVis-0.51.5, ratelimitr-0.4.1, RBesT-1.6-6, rbibutils-2.2.13, rbison-1.0.0, Rborist-0.3-2, RCAL-2.0, Rcgmin-2022-4.30, RCircos-1.2.2, rcmdcheck-1.4.0, RColorBrewer-1.1-3, Rcpp-1.0.10, RcppArmadillo-0.12.0.1.0, RcppEigen-0.3.3.9.3, RcppGSL-0.3.13, RcppParallel-5.1.7, RcppProgress-0.4.2, RcppRoll-0.3.0, RcppThread-2.1.3, RcppTOML-0.2.2, RCurl-1.98-1.10, rda-1.2-1, Rdpack-2.4, rdrop2-0.8.2.1, readbitmap-0.1.5, reader-1.0.6, readODS-1.8.0, readr-2.1.4, readxl-1.4.2, rebird-1.3.0, recipes-1.0.5, RefFreeEWAS-2.2, registry-0.5-1, regsem-1.9.3, relsurv-2.2-9, rematch-1.0.1, rematch2-2.1.2, remotes-2.4.2, rentrez-1.2.3, renv-0.17.1, reprex-2.0.2, resample-0.6, reshape-0.8.9, reshape2-1.4.4, reticulate-1.28, rex-1.2.1, rgbif-3.7.5, RGCCA-2.1.2, rgdal-1.6-5, rgeos-0.6-2, rgexf-0.16.2, rgl-1.0.1, Rglpk-0.6-4, RhpcBLASctl-0.23-42, ridge-3.3, ridigbio-0.3.6, RInside-0.2.18, rio-0.5.29, riskRegression-2022.11.28, ritis-1.0.0, RItools-0.3-3, rJava-1.0-6, rjson-0.2.21, RJSONIO-1.3-1.8, rlang-1.1.0, rle-0.9.2, 
rlecuyer-0.3-5, rlemon-0.2.1, rlist-0.4.6.2, rmarkdown-2.20, rmeta-3.0, Rmpfr-0.9-1, rms-6.5-0, RMTstat-0.3.1, rncl-0.8.7, rnetcarto-0.2.6, RNeXML-2.4.11, rngtools-1.5.2, rngWELL-0.10-9, RNifti-1.4.5, robustbase-0.95-0, ROCR-1.0-11, ROI-1.0-0, ROI.plugin.glpk-1.0-0, Rook-1.2, rootSolve-1.8.2.3, roptim-0.1.6, rotl-3.0.14, roxygen2-7.2.3, rpact-3.3.4, rpart-4.1.19, rpf-1.0.11, RPMM-1.25, rprojroot-2.0.3, rrcov-1.7-2, rredlist-0.7.1, rsample-1.1.1, rsconnect-0.8.29, Rserve-1.8-11, RSNNS-0.4-15, Rsolnp-1.16, RSpectra-0.16-1, RSQLite-2.3.0, Rssa-1.0.5, rstan-2.21.8, rstantools-2.3.0, rstatix-0.7.2, rstudioapi-0.14, rtdists-0.11-5, Rtsne-0.16, Rttf2pt1-1.3.12, RUnit-0.4.32, ruv-0.9.7.1, rversions-2.1.2, rvertnet-0.8.2, rvest-1.0.3, rvinecopulib-0.6.3.1.1, Rvmmin-2018-4.17.1, RWeka-0.4-46, RWekajars-3.9.3-2, s2-1.1.2, sampling-2.9, sandwich-3.0-2, sass-0.4.5, SBdecomp-1.2, scales-1.2.1, scam-1.2-13, scatterpie-0.1.8, scatterplot3d-0.3-43, scs-3.2.4, sctransform-0.3.5, SDMTools-1.1-221.2, seewave-2.2.0, segmented-1.6-2, selectr-0.4-2, sem-3.1-15, semPLS-1.0-10, semTools-0.5-6, sendmailR-1.4-0, sensemakr-0.1.4, sentometrics-1.0.0, seqinr-4.2-23, servr-0.25, sessioninfo-1.2.2, setRNG-2022.4-1, sf-1.0-11, sfheaders-0.4.2, sfsmisc-1.1-14, shadowtext-0.1.2, shape-1.4.6, shapefiles-0.7.2, shiny-1.7.4, shinycssloaders-1.0.0, shinydashboard-0.7.2, shinyjs-2.1.0, shinystan-2.6.0, shinythemes-1.2.0, signal-0.7-7, SignifReg-4.3, simex-1.8, SimSeq-1.4.0, SKAT-2.2.5, slam-0.1-50, slider-0.3.0, sm-2.2-5.7.1, smoof-1.6.0.3, smoother-1.1, sn-2.1.0, sna-2.7-1, SNFtool-2.3.1, snow-0.4-4, SnowballC-0.7.0, snowfall-1.84-6.2, SOAR-0.99-11, solrium-1.2.0, som-0.3-5.1, soundecology-1.3.3, sourcetools-0.1.7-1, sp-1.6-0, spaa-0.2.2, spam-2.9-1, spaMM-4.2.1, SparseM-1.81, SPAtest-3.1.2, spatial-7.3-16, spatstat-3.0-3, spatstat.core-2.4-4, spatstat.data-3.0-1, spatstat.explore-3.1-0, spatstat.geom-3.1-0, spatstat.linnet-3.0-6, spatstat.model-3.2-1, spatstat.random-3.1-4, spatstat.sparse-3.0-1, 
spatstat.utils-3.0-2, spData-2.2.2, splines, splitstackshape-1.4.8, spls-2.2-3, spocc-1.2.1, spThin-0.2.0, SQUAREM-2021.1, stabledist-0.7-1, stabs-0.6-4, StanHeaders-2.21.0-7, stargazer-5.2.3, stars-0.6-0, startupmsg-0.9.6, StatMatch-1.4.1, statmod-1.5.0, statnet-2019.6, statnet.common-4.8.0, stats, stats4, stdReg-3.4.1, stopwords-2.3, stringdist-0.9.10, stringi-1.7.12, stringr-1.5.0, strucchange-1.5-3, styler-1.9.1, subplex-1.8, SuperLearner-2.0-28, SuppDists-1.1-9.7, survey-4.1-1, survival-3.5-5, survivalROC-1.0.3.1, svd-0.5.3, svglite-2.1.1, swagger-3.33.1, symmoments-1.2.1, sys-3.4.1, systemfonts-1.0.4, tableone-0.13.2, tabletools-0.1.0, tau-0.0-24, taxize-0.9.100, tcltk, tcltk2-1.2-11, tclust-1.5-2, TeachingDemos-2.12, tensor-1.5, tensorA-0.36.2, tergm-4.1.1, terra-1.7-18, testit-0.13, testthat-3.1.7, textcat-1.0-8, textplot-0.2.2, textshaping-0.3.6, TFisher-0.2.0, TH.data-1.1-1, threejs-0.3.3, tibble-3.2.0, tictoc-1.1, tidygraph-1.2.3, tidyr-1.3.0, tidyselect-1.2.0, tidytext-0.4.1, tidytree-0.4.2, tidyverse-2.0.0, tiff-0.1-11, timechange-0.2.0, timeDate-4022.108, timereg-2.0.5, tinytex-0.44, tkrplot-0.0-27, tm-0.7-11, tmap-3.3-3, tmaptools-3.1-1, TMB-1.9.2, tmle-1.5.0.2, tmvnsim-1.0-2, tmvtnorm-1.5, tokenizers-0.3.0, tools, topicmodels-0.2-13, TraMineR-2.2-6, tree-1.0-43, triebeard-0.4.1, trimcluster-0.1-5, tripack-1.3-9.1, TruncatedNormal-2.2.2, truncnorm-1.0-8, trust-0.1-8, tseries-0.10-53, tseriesChaos-0.1-13.1, tsna-0.3.5, tsne-0.1-3.1, TTR-0.24.3, tuneR-1.4.3, twang-2.5, tweedie-2.3.5, tweenr-2.0.2, tzdb-0.3.0, ucminf-1.1-4.1, udpipe-0.8.11, umap-0.2.10.0, unbalanced-2.0, unikn-0.8.0, uniqueAtomMat-0.1-3-2, units-0.8-1, unmarked-1.2.5, UpSetR-1.4.0, urca-1.3-3, urlchecker-1.0.1, urltools-1.7.3, uroot-2.1-2, usethis-2.1.6, utf8-1.2.3, utils, uuid-1.1-0, V8-4.2.2, varhandle-2.0.5, vcd-1.4-11, vcfR-1.14.0, vctrs-0.6.0, vegan-2.6-4, VennDiagram-1.7.3, VGAM-1.1-8, VIM-6.2.2, VineCopula-2.4.5, vioplot-0.4.0, vipor-0.4.5, viridis-0.6.2, viridisLite-0.4.1, 
visdat-0.6.0, visNetwork-2.1.2, vroom-1.6.1, VSURF-1.2.0, waldo-0.4.0, warp-0.2.0, waveslim-1.8.4, wdm-0.2.3, webshot-0.5.4, webutils-1.1, weights-1.0.4, WeightSVM-1.7-11, wellknown-0.7.4, whisker-0.4.1, widgetframe-0.3.1, WikidataQueryServiceR-1.0.0, WikidataR-2.3.3, WikipediR-1.5.0, wikitaxa-0.4.0, withr-2.5.0, wk-0.7.1, word2vec-0.3.4, wordcloud-2.6, worrms-0.4.2, WriteXLS-6.4.0, xfun-0.37, xgboost-1.7.3.1, xlsx-0.6.5, xlsxjars-0.6.1, XML-3.99-0.13, xml2-1.3.3, xopen-1.0.0, xtable-1.8-4, xts-0.13.0, yaImpute-1.0-33, yaml-2.3.7, yulab.utils-0.0.6, zeallot-0.1.0, zip-2.2.2, zoo-1.8-11

"},{"location":"available_software/detail/RE2/","title":"RE2","text":"

RE2 is a fast, safe, thread-friendly alternative to backtracking regular expression engines like those used in PCRE, Perl, and Python. It is a C++ library.

https://github.com/google/re2

"},{"location":"available_software/detail/RE2/#available-modules","title":"Available modules","text":"

The overview below shows which RE2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using RE2, load one of these modules using a module load command like:

module load RE2/2024-03-01-GCCcore-13.2.0\n
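RE2 itself is a C++ library, but for ordinary patterns its matching semantics agree with Python's built-in (backtracking) re module — exactly the kind of engine RE2 is designed to replace. A quick sketch of the pattern style involved, using only the standard library:

```python
import re

# Python's re module is a backtracking engine -- the kind RE2 avoids.
# For simple patterns like this one, the two agree on what matches.
m = re.search(r"GCCcore-(\d+)\.(\d+)\.(\d+)", "RE2/2024-03-01-GCCcore-13.2.0")
print(m.groups())  # ('13', '2', '0')
```

RE2 guarantees linear-time matching by disallowing constructs (like backreferences) that force backtracking; patterns such as the one above are supported by both engines.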

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 RE2/2024-03-01-GCCcore-13.2.0 x x x x x x x x x RE2/2023-08-01-GCCcore-12.3.0 x x x x x x x x x RE2/2023-03-01-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ROOT/","title":"ROOT","text":"

The ROOT system provides a set of OO frameworks with all the functionality needed to handle and analyze large amounts of data in a very efficient way.

https://root.cern.ch

"},{"location":"available_software/detail/ROOT/#available-modules","title":"Available modules","text":"

The overview below shows which ROOT installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ROOT, load one of these modules using a module load command like:

module load ROOT/6.30.06-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ROOT/6.30.06-foss-2023a x x x x x x x x x ROOT/6.26.10-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/RapidJSON/","title":"RapidJSON","text":"

A fast JSON parser/generator for C++ with both SAX/DOM style API

https://rapidjson.org

"},{"location":"available_software/detail/RapidJSON/#available-modules","title":"Available modules","text":"

The overview below shows which RapidJSON installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using RapidJSON, load one of these modules using a module load command like:

module load RapidJSON/1.1.0-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 RapidJSON/1.1.0-GCCcore-12.2.0 x x x x x x - x x RapidJSON/1.1.0-20240409-GCCcore-13.2.0 x x x x x x x x x RapidJSON/1.1.0-20230928-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Raptor/","title":"Raptor","text":"

Set of parsers and serializers that generate Resource Description Framework(RDF) triples by parsing syntaxes or serialize the triples into a syntax.

https://librdf.org/raptor/

"},{"location":"available_software/detail/Raptor/#available-modules","title":"Available modules","text":"

The overview below shows which Raptor installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Raptor, load one of these modules using a module load command like:

module load Raptor/2.0.16-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Raptor/2.0.16-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Rasqal/","title":"Rasqal","text":"

A library handling RDF query syntaxes, construction and execution

https://librdf.org/rasqal

"},{"location":"available_software/detail/Rasqal/#available-modules","title":"Available modules","text":"

The overview below shows which Rasqal installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rasqal, load one of these modules using a module load command like:

module load Rasqal/0.9.33-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rasqal/0.9.33-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ReFrame/","title":"ReFrame","text":"

ReFrame is a framework for writing regression tests for HPC systems.

https://github.com/reframe-hpc/reframe

"},{"location":"available_software/detail/ReFrame/#available-modules","title":"Available modules","text":"

The overview below shows which ReFrame installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ReFrame, load one of these modules using a module load command like:

module load ReFrame/4.6.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ReFrame/4.6.2 x x x x x x x x x ReFrame/4.3.3 x x x x x x x x x"},{"location":"available_software/detail/ReFrame/#reframe462","title":"ReFrame/4.6.2","text":"

This is a list of extensions included in the module:

pip-24.0, reframe-4.6.2, setuptools-68.0.0, wheel-0.42.0

"},{"location":"available_software/detail/ReFrame/#reframe433","title":"ReFrame/4.3.3","text":"

This is a list of extensions included in the module:

pip-21.3.1, reframe-4.3.3, wheel-0.37.1

"},{"location":"available_software/detail/Redland/","title":"Redland","text":"

Redland is a set of free software C libraries that provide support for the Resource Description Framework (RDF).

https://librdf.org/

"},{"location":"available_software/detail/Redland/#available-modules","title":"Available modules","text":"

The overview below shows which Redland installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Redland, load one of these modules using a module load command like:

module load Redland/1.0.17-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Redland/1.0.17-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Rivet/","title":"Rivet","text":"

Rivet toolkit (Robust Independent Validation of Experiment and Theory). To use your own analysis, you must append its path to RIVET_ANALYSIS_PATH.

https://gitlab.com/hepcedar/rivet

"},{"location":"available_software/detail/Rivet/#available-modules","title":"Available modules","text":"

The overview below shows which Rivet installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rivet, load one of these modules using a module load command like:

module load Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rivet/3.1.9-gompi-2023a-HepMC3-3.2.6 x x x x x x x x x"},{"location":"available_software/detail/Ruby/","title":"Ruby","text":"

Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.

https://www.ruby-lang.org

"},{"location":"available_software/detail/Ruby/#available-modules","title":"Available modules","text":"

The overview below shows which Ruby installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Ruby, load one of these modules using a module load command like:

module load Ruby/3.3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Ruby/3.3.0-GCCcore-12.3.0 x x x x x x x x x Ruby/3.2.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Ruby/#ruby322-gcccore-1220","title":"Ruby/3.2.2-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

activesupport-5.2.8.1, addressable-2.8.4, arr-pm-0.0.12, backports-3.24.1, bundler-2.4.14, cabin-0.9.0, childprocess-4.1.0, clamp-1.3.2, concurrent-ruby-1.2.2, connection_pool-2.4.1, diff-lcs-1.5.0, ethon-0.16.0, faraday-1.2.0, faraday-net_http-3.0.2, faraday_middleware-1.2.0, ffi-1.15.5, gh-0.18.0, highline-2.1.0, i18n-1.14.1, json-2.6.3, launchy-2.5.2, minitest-5.18.0, multi_json-1.15.0, multipart-post-2.3.0, mustermann-3.0.0, net-http-persistent-2.9.4, net-http-pipeline-1.0.1, public_suffix-5.0.1, pusher-client-0.6.2, rack-2.2.4, rack-protection-3.0.6, rack-test-2.1.0, rspec-3.12.0, rspec-core-3.12.2, rspec-expectations-3.12.3, rspec-mocks-3.12.5, rspec-support-3.12.0, ruby2_keywords-0.0.5, sinatra-3.0.6, thread_safe-0.3.6, tilt-2.2.0, typhoeus-1.4.0, tzinfo-1.1.0, websocket-1.2.9, zeitwerk-2.6.8

"},{"location":"available_software/detail/Rust/","title":"Rust","text":"

Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.

https://www.rust-lang.org

"},{"location":"available_software/detail/Rust/#available-modules","title":"Available modules","text":"

The overview below shows which Rust installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Rust, load one of these modules using a module load command like:

module load Rust/1.76.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Rust/1.76.0-GCCcore-13.2.0 x x x x x x x x x Rust/1.75.0-GCCcore-12.3.0 x x x x x x x x x Rust/1.73.0-GCCcore-13.2.0 x x x x x x x x x Rust/1.70.0-GCCcore-12.3.0 x x x x x x x x x Rust/1.65.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SAMtools/","title":"SAMtools","text":"

SAM Tools provide various utilities for manipulating alignments in the SAM format, including sorting, merging, indexing and generating alignments in a per-position format.

https://www.htslib.org/

"},{"location":"available_software/detail/SAMtools/#available-modules","title":"Available modules","text":"

The overview below shows which SAMtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SAMtools, load one of these modules using a module load command like:

module load SAMtools/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SAMtools/1.18-GCC-12.3.0 x x x x x x x x x SAMtools/1.17-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SCOTCH/","title":"SCOTCH","text":"

Software package and libraries for sequential and parallel graph partitioning, static mapping, sparse matrix block ordering, and sequential mesh and hypergraph partitioning.

https://www.labri.fr/perso/pelegrin/scotch/

"},{"location":"available_software/detail/SCOTCH/#available-modules","title":"Available modules","text":"

The overview below shows which SCOTCH installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SCOTCH, load one of these modules using a module load command like:

module load SCOTCH/7.0.3-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SCOTCH/7.0.3-gompi-2023a x x x x x x x x x SCOTCH/7.0.3-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/SDL2/","title":"SDL2","text":"

SDL: Simple DirectMedia Layer, a cross-platform multimedia library

https://www.libsdl.org/

"},{"location":"available_software/detail/SDL2/#available-modules","title":"Available modules","text":"

The overview below shows which SDL2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SDL2, load one of these modules using a module load command like:

module load SDL2/2.28.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SDL2/2.28.5-GCCcore-13.2.0 x x x x x x x x x SDL2/2.28.2-GCCcore-12.3.0 x x x x x x x x x SDL2/2.26.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/SEPP/","title":"SEPP","text":"

SATe-enabled Phylogenetic Placement - addresses the problem of phylogenetic placement of short reads into reference alignments and trees.

https://github.com/smirarab/sepp

"},{"location":"available_software/detail/SEPP/#available-modules","title":"Available modules","text":"

The overview below shows which SEPP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SEPP, load one of these modules using a module load command like:

module load SEPP/4.5.1-foss-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SEPP/4.5.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/SIONlib/","title":"SIONlib","text":"

SIONlib is a scalable I/O library for parallel access to task-local files. The library not only supports writing and reading binary data to or from several thousands of processors into a single or a small number of physical files, but also provides global open and close functions to access SIONlib files in parallel. This package provides a stripped-down installation of SIONlib for use with performance tools (e.g., Score-P), with renamed symbols to avoid conflicts when an application using SIONlib itself is linked against a tool requiring a different SIONlib version.

https://www.fz-juelich.de/ias/jsc/EN/Expertise/Support/Software/SIONlib/_node.html

"},{"location":"available_software/detail/SIONlib/#available-modules","title":"Available modules","text":"

The overview below shows which SIONlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SIONlib, load one of these modules using a module load command like:

module load SIONlib/1.7.7-GCCcore-13.2.0-tools\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIONlib/1.7.7-GCCcore-13.2.0-tools x x x x x x x x x"},{"location":"available_software/detail/SIP/","title":"SIP","text":"

SIP is a tool that makes it very easy to create Python bindings for C and C++ libraries.

http://www.riverbankcomputing.com/software/sip/

"},{"location":"available_software/detail/SIP/#available-modules","title":"Available modules","text":"

The overview below shows which SIP installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SIP, load one of these modules using a module load command like:

module load SIP/6.8.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SIP/6.8.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/SLEPc/","title":"SLEPc","text":"

SLEPc (Scalable Library for Eigenvalue Problem Computations) is a software library for the solution of large scale sparse eigenvalue problems on parallel computers. It is an extension of PETSc and can be used for either standard or generalized eigenproblems, with real or complex arithmetic. It can also be used for computing a partial SVD of a large, sparse, rectangular matrix, and to solve quadratic eigenvalue problems.

https://slepc.upv.es

"},{"location":"available_software/detail/SLEPc/#available-modules","title":"Available modules","text":"

The overview below shows which SLEPc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SLEPc, load one of these modules using a module load command like:

module load SLEPc/3.20.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SLEPc/3.20.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SQLAlchemy/","title":"SQLAlchemy","text":"

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. SQLAlchemy provides a full suite of well known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

https://www.sqlalchemy.org/

"},{"location":"available_software/detail/SQLAlchemy/#available-modules","title":"Available modules","text":"

The overview below shows which SQLAlchemy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SQLAlchemy, load one of these modules using a module load command like:

module load SQLAlchemy/2.0.25-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLAlchemy/2.0.25-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/SQLAlchemy/#sqlalchemy2025-gcccore-1230","title":"SQLAlchemy/2.0.25-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

alembic-1.13.1, async-timeout-4.0.3, asyncpg-0.29.0, greenlet-3.0.3, SQLAlchemy-2.0.25

"},{"location":"available_software/detail/SQLite/","title":"SQLite","text":"

SQLite: SQL Database Engine in a C Library

https://www.sqlite.org/

"},{"location":"available_software/detail/SQLite/#available-modules","title":"Available modules","text":"

The overview below shows which SQLite installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SQLite, load one of these modules using a module load command like:

module load SQLite/3.43.1-GCCcore-13.2.0\n
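SQLite's C library is also exposed through Python's built-in sqlite3 bindings, which makes for a quick functional check. A minimal sketch (standalone, not specific to the EESSI module):

```python
import sqlite3

# In-memory database: nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE modules (name TEXT, version TEXT)")
conn.executemany(
    "INSERT INTO modules VALUES (?, ?)",
    [("SQLite", "3.43.1"), ("SQLite", "3.42.0")],
)
count, = conn.execute("SELECT COUNT(*) FROM modules").fetchone()
print(count)                   # number of rows inserted
print(sqlite3.sqlite_version)  # version of the SQLite library Python is linked against
conn.close()
```

Note that `sqlite3.sqlite_version` reports the library Python was built against, which may differ from the module version loaded above.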

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SQLite/3.43.1-GCCcore-13.2.0 x x x x x x x x x SQLite/3.42.0-GCCcore-12.3.0 x x x x x x x x x SQLite/3.39.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/STAR/","title":"STAR","text":"

STAR aligns RNA-seq reads to a reference genome using uncompressed suffix arrays.

https://github.com/alexdobin/STAR

"},{"location":"available_software/detail/STAR/#available-modules","title":"Available modules","text":"

The overview below shows which STAR installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using STAR, load one of these modules using a module load command like:

module load STAR/2.7.11b-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 STAR/2.7.11b-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/SWIG/","title":"SWIG","text":"

SWIG is a software development tool that connects programs written in C and C++ with a variety of high-level programming languages.

http://www.swig.org/

"},{"location":"available_software/detail/SWIG/#available-modules","title":"Available modules","text":"

The overview below shows which SWIG installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SWIG, load one of these modules using a module load command like:

module load SWIG/4.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SWIG/4.1.1-GCCcore-13.2.0 x x x x x x x x x SWIG/4.1.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/ScaFaCoS/","title":"ScaFaCoS","text":"

ScaFaCoS is a library of scalable fast coulomb solvers.

http://www.scafacos.de/

"},{"location":"available_software/detail/ScaFaCoS/#available-modules","title":"Available modules","text":"

The overview below shows which ScaFaCoS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ScaFaCoS, load one of these modules using a module load command like:

module load ScaFaCoS/1.0.4-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaFaCoS/1.0.4-foss-2023b - - - x x x x x x ScaFaCoS/1.0.4-foss-2023a - - - x x x x x x"},{"location":"available_software/detail/ScaLAPACK/","title":"ScaLAPACK","text":"

The ScaLAPACK (or Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed memory MIMD parallel computers.

https://www.netlib.org/scalapack/

"},{"location":"available_software/detail/ScaLAPACK/#available-modules","title":"Available modules","text":"

The overview below shows which ScaLAPACK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ScaLAPACK, load one of these modules using a module load command like:

module load ScaLAPACK/2.2.0-gompi-2023b-fb\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ScaLAPACK/2.2.0-gompi-2023b-fb x x x x x x x x x ScaLAPACK/2.2.0-gompi-2023a-fb x x x x x x x x x ScaLAPACK/2.2.0-gompi-2022b-fb x x x x x x - x x"},{"location":"available_software/detail/SciPy-bundle/","title":"SciPy-bundle","text":"

Bundle of Python packages for scientific software

https://python.org/

"},{"location":"available_software/detail/SciPy-bundle/#available-modules","title":"Available modules","text":"

The overview below shows which SciPy-bundle installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SciPy-bundle, load one of these modules using a module load command like:

module load SciPy-bundle/2023.11-gfbf-2023b\n
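Once a SciPy-bundle module is loaded, the bundled packages (numpy, scipy, pandas, and friends) are importable directly. A minimal numpy sketch, assuming the bundle (or any numpy installation) is on the Python path:

```python
import numpy as np

# Small array computation as a sanity check that numpy works.
a = np.arange(6).reshape(2, 3)     # [[0, 1, 2], [3, 4, 5]]
print(a.sum(axis=0).tolist())      # column sums: [3, 5, 7]
print(np.linalg.norm([3.0, 4.0]))  # Euclidean norm: 5.0
print(np.__version__)              # should match the bundle's numpy version
```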

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciPy-bundle/2023.11-gfbf-2023b x x x x x x x x x SciPy-bundle/2023.07-gfbf-2023a x x x x x x x x x SciPy-bundle/2023.02-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202311-gfbf-2023b","title":"SciPy-bundle/2023.11-gfbf-2023b","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.1, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.7, numpy-1.26.2, pandas-2.1.3, ply-3.11, pythran-0.14.0, scipy-1.11.4, tzdata-2023.3, versioneer-0.29

"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202307-gfbf-2023a","title":"SciPy-bundle/2023.07-gfbf-2023a","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.7, deap-1.4.0, gast-0.5.4, mpmath-1.3.0, numexpr-2.8.4, numpy-1.25.1, pandas-2.0.3, ply-3.11, pythran-0.13.1, scipy-1.11.1, tzdata-2023.3, versioneer-0.29

"},{"location":"available_software/detail/SciPy-bundle/#scipy-bundle202302-gfbf-2022b","title":"SciPy-bundle/2023.02-gfbf-2022b","text":"

This is a list of extensions included in the module:

beniget-0.4.1, Bottleneck-1.3.5, deap-1.3.3, gast-0.5.3, mpmath-1.2.1, numexpr-2.8.4, numpy-1.24.2, pandas-1.5.3, ply-3.11, pythran-0.12.1, scipy-1.10.1

"},{"location":"available_software/detail/SciTools-Iris/","title":"SciTools-Iris","text":"

A powerful, format-agnostic, community-driven Python package for analysing and visualising Earth science data.

https://scitools-iris.readthedocs.io

"},{"location":"available_software/detail/SciTools-Iris/#available-modules","title":"Available modules","text":"

The overview below shows which SciTools-Iris installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SciTools-Iris, load one of these modules using a module load command like:

module load SciTools-Iris/3.9.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SciTools-Iris/3.9.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SciTools-Iris/#scitools-iris390-foss-2023a","title":"SciTools-Iris/3.9.0-foss-2023a","text":"

This is a list of extensions included in the module:

antlr4-python3-runtime-4.7.2, cf-units-3.2.0, scitools_iris-3.9.0

"},{"location":"available_software/detail/Score-P/","title":"Score-P","text":"

The Score-P measurement infrastructure is a highly scalable and easy-to-use tool suite for profiling, event tracing, and online analysis of HPC applications.

https://www.score-p.org

"},{"location":"available_software/detail/Score-P/#available-modules","title":"Available modules","text":"

The overview below shows which Score-P installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Score-P, load one of these modules using a module load command like:

module load Score-P/8.4-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Score-P/8.4-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/Seaborn/","title":"Seaborn","text":"

Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.

https://seaborn.pydata.org/

"},{"location":"available_software/detail/Seaborn/#available-modules","title":"Available modules","text":"

The overview below shows which Seaborn installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Seaborn, load one of these modules using a module load command like:

module load Seaborn/0.13.2-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Seaborn/0.13.2-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/Shapely/","title":"Shapely","text":"

Shapely is a BSD-licensed Python package for manipulation and analysis of planar geometric objects. It is based on the widely deployed GEOS (the engine of PostGIS) and JTS (from which GEOS is ported) libraries.

https://github.com/Toblerity/Shapely

"},{"location":"available_software/detail/Shapely/#available-modules","title":"Available modules","text":"

The overview below shows which Shapely installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Shapely, load one of these modules using a module load command like:

module load Shapely/2.0.1-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Shapely/2.0.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/SlurmViewer/","title":"SlurmViewer","text":"

View the status of a Slurm cluster, including nodes and queue.

https://gitlab.com/lkeb/slurm_viewer

"},{"location":"available_software/detail/SlurmViewer/#available-modules","title":"Available modules","text":"

The overview below shows which SlurmViewer installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SlurmViewer, load one of these modules using a module load command like:

module load SlurmViewer/1.0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SlurmViewer/1.0.1-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/SlurmViewer/#slurmviewer101-gcccore-1320","title":"SlurmViewer/1.0.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

asyncssh-2.18.0, plotext-5.2.8, slurm-viewer-1.0.1, textual-0.85.2, textual-plotext-0.2.1

"},{"location":"available_software/detail/Solids4foam/","title":"Solids4foam","text":"

A toolbox for performing solid mechanics and fluid-solid interactions in OpenFOAM.

https://www.solids4foam.com/

"},{"location":"available_software/detail/Solids4foam/#available-modules","title":"Available modules","text":"

The overview below shows which Solids4foam installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Solids4foam, load one of these modules using a module load command like:

module load Solids4foam/2.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Solids4foam/2.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SuiteSparse/","title":"SuiteSparse","text":"

SuiteSparse is a collection of libraries to manipulate sparse matrices.

https://faculty.cse.tamu.edu/davis/suitesparse.html

"},{"location":"available_software/detail/SuiteSparse/#available-modules","title":"Available modules","text":"

The overview below shows which SuiteSparse installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SuiteSparse, load one of these modules using a module load command like:

module load SuiteSparse/7.1.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SuiteSparse/7.1.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/SuperLU_DIST/","title":"SuperLU_DIST","text":"

SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines.

https://crd-legacy.lbl.gov/~xiaoye/SuperLU/

"},{"location":"available_software/detail/SuperLU_DIST/#available-modules","title":"Available modules","text":"

The overview below shows which SuperLU_DIST installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using SuperLU_DIST, load one of these modules using a module load command like:

module load SuperLU_DIST/8.1.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 SuperLU_DIST/8.1.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Szip/","title":"Szip","text":"

Szip compression software, providing lossless compression of scientific data

https://www.hdfgroup.org/doc_resource/SZIP/

"},{"location":"available_software/detail/Szip/#available-modules","title":"Available modules","text":"

The overview below shows which Szip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Szip, load one of these modules using a module load command like:

module load Szip/2.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Szip/2.1.1-GCCcore-13.2.0 x x x x x x x x x Szip/2.1.1-GCCcore-12.3.0 x x x x x x x x x Szip/2.1.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tcl/","title":"Tcl","text":"

Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more.

https://www.tcl.tk/

"},{"location":"available_software/detail/Tcl/#available-modules","title":"Available modules","text":"

The overview below shows which Tcl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tcl, load one of these modules using a module load command like:

module load Tcl/8.6.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tcl/8.6.13-GCCcore-13.2.0 x x x x x x x x x Tcl/8.6.13-GCCcore-12.3.0 x x x x x x x x x Tcl/8.6.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/TensorFlow/","title":"TensorFlow","text":"

An open-source software library for Machine Intelligence

https://www.tensorflow.org/

"},{"location":"available_software/detail/TensorFlow/#available-modules","title":"Available modules","text":"

The overview below shows which TensorFlow installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using TensorFlow, load one of these modules using a module load command like:

module load TensorFlow/2.13.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 TensorFlow/2.13.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/TensorFlow/#tensorflow2130-foss-2023a","title":"TensorFlow/2.13.0-foss-2023a","text":"

This is a list of extensions included in the module:

absl-py-1.4.0, astor-0.8.1, astunparse-1.6.3, cachetools-5.3.1, google-auth-2.22.0, google-auth-oauthlib-1.0.0, google-pasta-0.2.0, grpcio-1.57.0, gviz-api-1.10.0, keras-2.13.1, Markdown-3.4.4, oauthlib-3.2.2, opt-einsum-3.3.0, portpicker-1.5.2, pyasn1-modules-0.3.0, requests-oauthlib-1.3.1, rsa-4.9, tblib-2.0.0, tensorboard-2.13.0, tensorboard-data-server-0.7.1, tensorboard-plugin-profile-2.13.1, tensorboard-plugin-wit-1.8.1, TensorFlow-2.13.0, tensorflow-estimator-2.13.0, termcolor-2.3.0, Werkzeug-2.3.7, wrapt-1.15.0

"},{"location":"available_software/detail/Tk/","title":"Tk","text":"

Tk is an open source, cross-platform widget toolchain that provides a library of basic elements for building a graphical user interface (GUI) in many different programming languages.

https://www.tcl.tk/

"},{"location":"available_software/detail/Tk/#available-modules","title":"Available modules","text":"

The overview below shows which Tk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tk, load one of these modules using a module load command like:

module load Tk/8.6.13-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tk/8.6.13-GCCcore-13.2.0 x x x x x x x x x Tk/8.6.13-GCCcore-12.3.0 x x x x x x x x x Tk/8.6.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tkinter/","title":"Tkinter","text":"

Tkinter module, built with the Python buildsystem

https://python.org/

"},{"location":"available_software/detail/Tkinter/#available-modules","title":"Available modules","text":"

The overview below shows which Tkinter installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tkinter, load one of these modules using a module load command like:

module load Tkinter/3.11.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tkinter/3.11.5-GCCcore-13.2.0 x x x x x x x x x Tkinter/3.11.3-GCCcore-12.3.0 x x x x x x x x x Tkinter/3.10.8-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Tombo/","title":"Tombo","text":"

Tombo is a suite of tools primarily for the identification of modified nucleotides from raw nanopore sequencing data.

https://github.com/nanoporetech/tombo

"},{"location":"available_software/detail/Tombo/#available-modules","title":"Available modules","text":"

The overview below shows which Tombo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Tombo, load one of these modules using a module load command like:

module load Tombo/1.5.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Tombo/1.5.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Tombo/#tombo151-foss-2023a","title":"Tombo/1.5.1-foss-2023a","text":"

This is a list of extensions included in the module:

mappy-2.28, ont-tombo-1.5.1, pyfaidx-0.5.8

"},{"location":"available_software/detail/Transrate/","title":"Transrate","text":"

Transrate is software for de-novo transcriptome assembly quality analysis. It examines your assembly in detail and compares it to experimental evidence such as the sequencing reads, reporting quality scores for contigs and assemblies. This allows you to choose between assemblers and parameters, filter out the bad contigs from an assembly, and help decide when to stop trying to improve the assembly.

https://hibberdlab.com/transrate

"},{"location":"available_software/detail/Transrate/#available-modules","title":"Available modules","text":"

The overview below shows which Transrate installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Transrate, load one of these modules using a module load command like:

module load Transrate/1.0.3-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Transrate/1.0.3-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/UCC-CUDA/","title":"UCC-CUDA","text":"

UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes. This module adds the UCC CUDA support.

https://www.openucx.org/

"},{"location":"available_software/detail/UCC-CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which UCC-CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCC-CUDA, load one of these modules using a module load command like:

module load UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC-CUDA/1.2.0-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/UCC/","title":"UCC","text":"

UCC (Unified Collective Communication) is a collective communication operations API and library that is flexible, complete, and feature-rich for current and emerging programming models and runtimes.

https://www.openucx.org/

"},{"location":"available_software/detail/UCC/#available-modules","title":"Available modules","text":"

The overview below shows which UCC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCC, load one of these modules using a module load command like:

module load UCC/1.2.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCC/1.2.0-GCCcore-13.2.0 x x x x x x x x x UCC/1.2.0-GCCcore-12.3.0 x x x x x x x x x UCC/1.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UCX-CUDA/","title":"UCX-CUDA","text":"

Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications. This module adds the UCX CUDA support.

http://www.openucx.org/

"},{"location":"available_software/detail/UCX-CUDA/#available-modules","title":"Available modules","text":"

The overview below shows which UCX-CUDA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCX-CUDA, load one of these modules using a module load command like:

module load UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX-CUDA/1.14.1-GCCcore-12.3.0-CUDA-12.1.1 x x x x x x - x x"},{"location":"available_software/detail/UCX/","title":"UCX","text":"

Unified Communication X. An open-source production grade communication framework for data centric and high-performance applications.

https://www.openucx.org/

"},{"location":"available_software/detail/UCX/#available-modules","title":"Available modules","text":"

The overview below shows which UCX installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UCX, load one of these modules using a module load command like:

module load UCX/1.15.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UCX/1.15.0-GCCcore-13.2.0 x x x x x x x x x UCX/1.14.1-GCCcore-12.3.0 x x x x x x x x x UCX/1.13.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UDUNITS/","title":"UDUNITS","text":"

UDUNITS supports conversion of unit specifications between formatted and binary forms, arithmetic manipulation of units, and conversion of values between compatible scales of measurement.

https://www.unidata.ucar.edu/software/udunits/

"},{"location":"available_software/detail/UDUNITS/#available-modules","title":"Available modules","text":"

The overview below shows which UDUNITS installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UDUNITS, load one of these modules using a module load command like:

module load UDUNITS/2.2.28-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UDUNITS/2.2.28-GCCcore-13.2.0 x x x x x x x x x UDUNITS/2.2.28-GCCcore-12.3.0 x x x x x x x x x UDUNITS/2.2.28-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/UnZip/","title":"UnZip","text":"

UnZip is an extraction utility for archives compressed in .zip format (also called \"zipfiles\"). Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own Zip program, our primary objectives have been portability and non-MSDOS functionality.

http://www.info-zip.org/UnZip.html

"},{"location":"available_software/detail/UnZip/#available-modules","title":"Available modules","text":"

The overview below shows which UnZip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using UnZip, load one of these modules using a module load command like:

module load UnZip/6.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 UnZip/6.0-GCCcore-13.2.0 x x x x x x x x x UnZip/6.0-GCCcore-12.3.0 x x x x x x x x x UnZip/6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/VCFtools/","title":"VCFtools","text":"

The aim of VCFtools is to provide easily accessible methods for working with complex genetic variation data in the form of VCF files.

https://vcftools.github.io

"},{"location":"available_software/detail/VCFtools/#available-modules","title":"Available modules","text":"

The overview below shows which VCFtools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using VCFtools, load one of these modules using a module load command like:

module load VCFtools/0.1.16-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 VCFtools/0.1.16-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/VTK/","title":"VTK","text":"

The Visualization Toolkit (VTK) is an open-source, freely available software system for 3D computer graphics, image processing and visualization. VTK consists of a C++ class library and several interpreted interface layers including Tcl/Tk, Java, and Python. VTK supports a wide variety of visualization algorithms including: scalar, vector, tensor, texture, and volumetric methods; and advanced modeling techniques such as: implicit modeling, polygon reduction, mesh smoothing, cutting, contouring, and Delaunay triangulation.

https://www.vtk.org

"},{"location":"available_software/detail/VTK/#available-modules","title":"Available modules","text":"

The overview below shows which VTK installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using VTK, load one of these modules using a module load command like:

module load VTK/9.3.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 VTK/9.3.0-foss-2023b x x x x x x x x x VTK/9.3.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/Valgrind/","title":"Valgrind","text":"

Valgrind: Debugging and profiling tools

https://valgrind.org

"},{"location":"available_software/detail/Valgrind/#available-modules","title":"Available modules","text":"

The overview below shows which Valgrind installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Valgrind, load one of these modules using a module load command like:

module load Valgrind/3.23.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Valgrind/3.23.0-gompi-2023b x x x x x x x x x Valgrind/3.21.0-gompi-2023a x x x x x x x x x Valgrind/3.21.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/Vim/","title":"Vim","text":"

Vim is an advanced text editor that seeks to provide the power of the de-facto Unix editor 'Vi', with a more complete feature set.

http://www.vim.org

"},{"location":"available_software/detail/Vim/#available-modules","title":"Available modules","text":"

The overview below shows which Vim installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Vim, load one of these modules using a module load command like:

module load Vim/9.1.0004-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Vim/9.1.0004-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Voro%2B%2B/","title":"Voro++","text":"

Voro++ is a software library for carrying out three-dimensional computations of the Voronoi tessellation. A distinguishing feature of the Voro++ library is that it carries out cell-based calculations, computing the Voronoi cell for each particle individually. It is particularly well-suited for applications that rely on cell-based statistics, where features of Voronoi cells (eg. volume, centroid, number of faces) can be used to analyze a system of particles.

http://math.lbl.gov/voro++/

"},{"location":"available_software/detail/Voro%2B%2B/#available-modules","title":"Available modules","text":"

The overview below shows which Voro++ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Voro++, load one of these modules using a module load command like:

module load Voro++/0.4.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Voro++/0.4.6-GCCcore-13.2.0 x x x x x x x x x Voro++/0.4.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/WCSLIB/","title":"WCSLIB","text":"

The FITS \"World Coordinate System\" (WCS) standard defines keywords and usage that provide for the description of astronomical coordinate systems in a FITS image header.

https://www.atnf.csiro.au/people/mcalabre/WCS/

"},{"location":"available_software/detail/WCSLIB/#available-modules","title":"Available modules","text":"

The overview below shows which WCSLIB installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WCSLIB, load one of these modules using a module load command like:

module load WCSLIB/7.11-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WCSLIB/7.11-GCC-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/WRF/","title":"WRF","text":"

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs.

https://www.wrf-model.org

"},{"location":"available_software/detail/WRF/#available-modules","title":"Available modules","text":"

The overview below shows which WRF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WRF, load one of these modules using a module load command like:

module load WRF/4.4.1-foss-2022b-dmpar\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WRF/4.4.1-foss-2022b-dmpar x x x x x x - x x"},{"location":"available_software/detail/WSClean/","title":"WSClean","text":"

WSClean (w-stacking clean) is a fast generic widefield imager. It implements several gridding algorithms and offers fully-automated multi-scale multi-frequency deconvolution.

https://wsclean.readthedocs.io/

"},{"location":"available_software/detail/WSClean/#available-modules","title":"Available modules","text":"

The overview below shows which WSClean installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WSClean, load one of these modules using a module load command like:

module load WSClean/3.4-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WSClean/3.4-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/Wayland/","title":"Wayland","text":"

Wayland is a project to define a protocol for a compositor to talk to its clients as well as a library implementation of the protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.

https://wayland.freedesktop.org/

"},{"location":"available_software/detail/Wayland/#available-modules","title":"Available modules","text":"

The overview below shows which Wayland installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Wayland, load one of these modules using a module load command like:

module load Wayland/1.22.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Wayland/1.22.0-GCCcore-13.2.0 x x x x x x x x x Wayland/1.22.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/WhatsHap/","title":"WhatsHap","text":"

WhatsHap is a software for phasing genomic variants using DNA sequencing reads, also called read-based phasing or haplotype assembly. It is especially suitable for long reads, but also works well with short reads.

https://whatshap.readthedocs.io

"},{"location":"available_software/detail/WhatsHap/#available-modules","title":"Available modules","text":"

The overview below shows which WhatsHap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using WhatsHap, load one of these modules using a module load command like:

module load WhatsHap/2.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 WhatsHap/2.2-foss-2023a x x x x x x x x x WhatsHap/2.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/WhatsHap/#whatshap22-foss-2023a","title":"WhatsHap/2.2-foss-2023a","text":"

This is a list of extensions included in the module:

PuLP-2.8.0, whatshap-2.2, xopen-1.7.0

"},{"location":"available_software/detail/WhatsHap/#whatshap21-foss-2022b","title":"WhatsHap/2.1-foss-2022b","text":"

This is a list of extensions included in the module:

pulp-2.8.0, WhatsHap-2.1, xopen-1.7.0

"},{"location":"available_software/detail/X11/","title":"X11","text":"

The X Window System (X11) is a windowing system for bitmap displays

https://www.x.org

"},{"location":"available_software/detail/X11/#available-modules","title":"Available modules","text":"

The overview below shows which X11 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using X11, load one of these modules using a module load command like:

module load X11/20231019-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 X11/20231019-GCCcore-13.2.0 x x x x x x x x x X11/20230603-GCCcore-12.3.0 x x x x x x x x x X11/20221110-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/XML-LibXML/","title":"XML-LibXML","text":"

Perl binding for libxml2

https://metacpan.org/pod/distribution/XML-LibXML/LibXML.pod

"},{"location":"available_software/detail/XML-LibXML/#available-modules","title":"Available modules","text":"

The overview below shows which XML-LibXML installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using XML-LibXML, load one of these modules using a module load command like:

module load XML-LibXML/2.0209-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 XML-LibXML/2.0209-GCCcore-12.3.0 x x x x x x x x x XML-LibXML/2.0208-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/XML-LibXML/#xml-libxml20209-gcccore-1230","title":"XML-LibXML/2.0209-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

Alien::Base-2.80, Alien::Build::Plugin::Download::GitLab-0.01, Alien::Libxml2-0.19, File::chdir-0.1011, XML::LibXML-2.0209

"},{"location":"available_software/detail/XML-LibXML/#xml-libxml20208-gcccore-1220","title":"XML-LibXML/2.0208-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

Alien::Base-2.80, Alien::Build::Plugin::Download::GitLab-0.01, Alien::Libxml2-0.19, File::chdir-0.1011, XML::LibXML-2.0208

"},{"location":"available_software/detail/Xerces-C%2B%2B/","title":"Xerces-C++","text":"

Xerces-C++ is a validating XML parser written in a portable subset of C++. Xerces-C++ makes it easy to give your application the ability to read and write XML data. A shared library is provided for parsing, generating, manipulating, and validating XML documents using the DOM, SAX, and SAX2 APIs.

https://xerces.apache.org/xerces-c/

"},{"location":"available_software/detail/Xerces-C%2B%2B/#available-modules","title":"Available modules","text":"

The overview below shows which Xerces-C++ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Xerces-C++, load one of these modules using a module load command like:

module load Xerces-C++/3.2.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xerces-C++/3.2.5-GCCcore-13.2.0 x x x x x x x x x Xerces-C++/3.2.4-GCCcore-12.3.0 x x x x x x x x x Xerces-C++/3.2.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Xvfb/","title":"Xvfb","text":"

Xvfb is an X server that can run on machines with no display hardware and no physical input devices. It emulates a dumb framebuffer using virtual memory.

https://www.x.org/releases/X11R7.6/doc/man/man1/Xvfb.1.xhtml

"},{"location":"available_software/detail/Xvfb/#available-modules","title":"Available modules","text":"

The overview below shows which Xvfb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Xvfb, load one of these modules using a module load command like:

module load Xvfb/21.1.9-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Xvfb/21.1.9-GCCcore-13.2.0 x x x x x x x x x Xvfb/21.1.8-GCCcore-12.3.0 x x x x x x x x x Xvfb/21.1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/YODA/","title":"YODA","text":"

Yet more Objects for (High Energy Physics) Data Analysis

https://yoda.hepforge.org/

"},{"location":"available_software/detail/YODA/#available-modules","title":"Available modules","text":"

The overview below shows which YODA installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using YODA, load one of these modules using a module load command like:

module load YODA/1.9.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 YODA/1.9.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Yasm/","title":"Yasm","text":"

Yasm: Complete rewrite of the NASM assembler with BSD license

https://www.tortall.net/projects/yasm/

"},{"location":"available_software/detail/Yasm/#available-modules","title":"Available modules","text":"

The overview below shows which Yasm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Yasm, load one of these modules using a module load command like:

module load Yasm/1.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Yasm/1.3.0-GCCcore-13.2.0 - - - x x x x x x Yasm/1.3.0-GCCcore-12.3.0 - - - x x x x x x Yasm/1.3.0-GCCcore-12.2.0 - - - x x x - x x"},{"location":"available_software/detail/Z3/","title":"Z3","text":"

Z3 is a theorem prover from Microsoft Research with support for bitvectors, booleans, arrays, floating point numbers, strings, and other data types. This module includes z3-solver, the Python interface of Z3.

https://github.com/Z3Prover/z3

"},{"location":"available_software/detail/Z3/#available-modules","title":"Available modules","text":"

The overview below shows which Z3 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Z3, load one of these modules using a module load command like:

module load Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3 x x x x x x - x x Z3/4.12.2-GCCcore-12.3.0 x x x x x x x x x Z3/4.12.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230-python-3113","title":"Z3/4.12.2-GCCcore-12.3.0-Python-3.11.3","text":"

This is a list of extensions included in the module:

z3-solver-4.12.2.0

"},{"location":"available_software/detail/Z3/#z34122-gcccore-1230","title":"Z3/4.12.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

z3-solver-4.12.2.0

"},{"location":"available_software/detail/ZeroMQ/","title":"ZeroMQ","text":"

ZeroMQ looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply. It's fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems.

https://www.zeromq.org/

"},{"location":"available_software/detail/ZeroMQ/#available-modules","title":"Available modules","text":"

The overview below shows which ZeroMQ installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ZeroMQ, load one of these modules using a module load command like:

module load ZeroMQ/4.3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ZeroMQ/4.3.5-GCCcore-13.2.0 x x x x x x x x x ZeroMQ/4.3.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/Zip/","title":"Zip","text":"

Zip is a compression and file packaging/archive utility. Although highly compatible both with PKWARE's PKZIP and PKUNZIP utilities for MS-DOS and with Info-ZIP's own UnZip, our primary objectives have been portability and other-than-MSDOS functionality.

http://www.info-zip.org/Zip.html

"},{"location":"available_software/detail/Zip/#available-modules","title":"Available modules","text":"

The overview below shows which Zip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using Zip, load one of these modules using a module load command like:

module load Zip/3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 Zip/3.0-GCCcore-12.3.0 x x x x x x x x x Zip/3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/amdahl/","title":"amdahl","text":"

This Python module contains a pseudo-application that can be used as a black box to reproduce Amdahl's Law. It does not do real calculations, nor any real communication, so can easily be overloaded.

https://github.com/hpc-carpentry/amdahl

"},{"location":"available_software/detail/amdahl/#available-modules","title":"Available modules","text":"

The overview below shows which amdahl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using amdahl, load one of these modules using a module load command like:

module load amdahl/0.3.1-gompi-2023a\n
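Amdahl's Law itself bounds the speedup achievable with n processors when only a fraction p of the work parallelizes. A minimal sketch of the formula (this is plain Python, not part of the amdahl package):

```python
# Amdahl's Law: S(n) = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the number of processors.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, 8 processors give ~5.9x,
# and the speedup can never exceed 1 / (1 - p) = 20x.
print(round(amdahl_speedup(0.95, 8), 2))
```

The pseudo-application reproduces exactly this saturation curve when run with increasing MPI rank counts.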

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 amdahl/0.3.1-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/archspec/","title":"archspec","text":"

A library for detecting, labeling, and reasoning about microarchitectures

https://github.com/archspec/archspec

"},{"location":"available_software/detail/archspec/#available-modules","title":"Available modules","text":"

The overview below shows which archspec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using archspec, load one of these modules using a module load command like:

module load archspec/0.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 archspec/0.2.2-GCCcore-13.2.0 x x x x x x x x x archspec/0.2.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/arpack-ng/","title":"arpack-ng","text":"

ARPACK is a collection of Fortran77 subroutines designed to solve large-scale eigenvalue problems.

https://github.com/opencollab/arpack-ng

"},{"location":"available_software/detail/arpack-ng/#available-modules","title":"Available modules","text":"

The overview below shows which arpack-ng installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using arpack-ng, load one of these modules using a module load command like:

module load arpack-ng/3.9.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 arpack-ng/3.9.0-foss-2023b x x x x x x x x x arpack-ng/3.9.0-foss-2023a x x x x x x x x x arpack-ng/3.8.0-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/arrow-R/","title":"arrow-R","text":"

R interface to the Apache Arrow C++ library

https://cran.r-project.org/web/packages/arrow

"},{"location":"available_software/detail/arrow-R/#available-modules","title":"Available modules","text":"

The overview below shows which arrow-R installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using arrow-R, load one of these modules using a module load command like:

module load arrow-R/14.0.1-foss-2023a-R-4.3.2\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 arrow-R/14.0.1-foss-2023a-R-4.3.2 x x x x x x x x x arrow-R/11.0.0.3-foss-2022b-R-4.2.2 x x x x x x - x x"},{"location":"available_software/detail/at-spi2-atk/","title":"at-spi2-atk","text":"

AT-SPI 2 toolkit bridge

https://wiki.gnome.org/Accessibility

"},{"location":"available_software/detail/at-spi2-atk/#available-modules","title":"Available modules","text":"

The overview below shows which at-spi2-atk installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using at-spi2-atk, load one of these modules using a module load command like:

module load at-spi2-atk/2.38.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-atk/2.38.0-GCCcore-13.2.0 x x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.3.0 x x x x x x x x x at-spi2-atk/2.38.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/at-spi2-core/","title":"at-spi2-core","text":"

Assistive Technology Service Provider Interface.

https://wiki.gnome.org/Accessibility

"},{"location":"available_software/detail/at-spi2-core/#available-modules","title":"Available modules","text":"

The overview below shows which at-spi2-core installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using at-spi2-core, load one of these modules using a module load command like:

module load at-spi2-core/2.50.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 at-spi2-core/2.50.0-GCCcore-13.2.0 x x x x x x x x x at-spi2-core/2.49.91-GCCcore-12.3.0 x x x x x x x x x at-spi2-core/2.46.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/basemap/","title":"basemap","text":"

The matplotlib basemap toolkit is a library for plotting 2D data on maps in Python

https://matplotlib.org/basemap/

"},{"location":"available_software/detail/basemap/#available-modules","title":"Available modules","text":"

The overview below shows which basemap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using basemap, load one of these modules using a module load command like:

module load basemap/1.3.9-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 basemap/1.3.9-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/basemap/#basemap139-foss-2023a","title":"basemap/1.3.9-foss-2023a","text":"

This is a list of extensions included in the module:

basemap-1.3.9, basemap_data-1.3.9, pyshp-2.3.1

"},{"location":"available_software/detail/bokeh/","title":"bokeh","text":"

Statistical and novel interactive HTML plots for Python

https://github.com/bokeh/bokeh

"},{"location":"available_software/detail/bokeh/#available-modules","title":"Available modules","text":"

The overview below shows which bokeh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using bokeh, load one of these modules using a module load command like:

module load bokeh/3.2.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 bokeh/3.2.2-foss-2023a x x x x x x x x x bokeh/3.2.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/bokeh/#bokeh322-foss-2023a","title":"bokeh/3.2.2-foss-2023a","text":"

This is a list of extensions included in the module:

bokeh-3.2.2, contourpy-1.0.7, xyzservices-2023.7.0

"},{"location":"available_software/detail/bokeh/#bokeh321-foss-2022b","title":"bokeh/3.2.1-foss-2022b","text":"

This is a list of extensions included in the module:

bokeh-3.2.1, contourpy-1.0.7, tornado-6.3.2, xyzservices-2023.7.0

"},{"location":"available_software/detail/cURL/","title":"cURL","text":"

libcurl is a free and easy-to-use client-side URL transfer library, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. libcurl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, Kerberos), file transfer resume, http proxy tunneling and more.

https://curl.haxx.se

"},{"location":"available_software/detail/cURL/#available-modules","title":"Available modules","text":"

The overview below shows which cURL installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cURL, load one of these modules using a module load command like:

module load cURL/8.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cURL/8.3.0-GCCcore-13.2.0 x x x x x x x x x cURL/8.0.1-GCCcore-12.3.0 x x x x x x x x x cURL/7.86.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/cairo/","title":"cairo","text":"

Cairo is a 2D graphics library with support for multiple output devices. Currently supported output targets include the X Window System (via both Xlib and XCB), Quartz, Win32, image buffers, PostScript, PDF, and SVG file output. Experimental backends include OpenGL, BeOS, OS/2, and DirectFB

https://cairographics.org

"},{"location":"available_software/detail/cairo/#available-modules","title":"Available modules","text":"

The overview below shows which cairo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cairo, load one of these modules using a module load command like:

module load cairo/1.18.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cairo/1.18.0-GCCcore-13.2.0 x x x x x x x x x cairo/1.17.8-GCCcore-12.3.0 x x x x x x x x x cairo/1.17.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/casacore/","title":"casacore","text":"

A suite of C++ libraries for radio astronomy data processing. The ephemerides data needs to be in DATA_DIR and the location must be specified at runtime. Thus users can update them.

https://github.com/casacore/casacore

"},{"location":"available_software/detail/casacore/#available-modules","title":"Available modules","text":"

The overview below shows which casacore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using casacore, load one of these modules using a module load command like:

module load casacore/3.5.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 casacore/3.5.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/ccache/","title":"ccache","text":"

Ccache (or \u201cccache\u201d) is a compiler cache. It speeds up recompilation by caching previous compilations and detecting when the same compilation is being done again.

https://ccache.dev/

"},{"location":"available_software/detail/ccache/#available-modules","title":"Available modules","text":"

The overview below shows which ccache installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ccache, load one of these modules using a module load command like:

module load ccache/4.9-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ccache/4.9-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cffi/","title":"cffi","text":"

C Foreign Function Interface for Python. Interact with almost any C code from Python, based on C-like declarations that you can often copy-paste from header files or documentation.

https://cffi.readthedocs.io/en/latest/

"},{"location":"available_software/detail/cffi/#available-modules","title":"Available modules","text":"

The overview below shows which cffi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cffi, load one of these modules using a module load command like:

module load cffi/1.15.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cffi/1.15.1-GCCcore-13.2.0 x x x x x x x x x cffi/1.15.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1320","title":"cffi/1.15.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

cffi-1.15.1, pycparser-2.21

"},{"location":"available_software/detail/cffi/#cffi1151-gcccore-1230","title":"cffi/1.15.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

cffi-1.15.1, pycparser-2.21

"},{"location":"available_software/detail/cimfomfa/","title":"cimfomfa","text":"

This library supports both MCL, a cluster algorithm for graphs, and zoem, a macro/DSL language. It supplies abstractions for memory management, I/O, associative arrays, strings, heaps, and a few other things. The string library has had heavy testing as part of zoem. Both understandably and regrettably I chose long ago to make it C-string-compatible, hence nul bytes may not be part of a string. At some point I hope to rectify this, perhaps unrealistically.

https://github.com/micans/cimfomfa

"},{"location":"available_software/detail/cimfomfa/#available-modules","title":"Available modules","text":"

The overview below shows which cimfomfa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cimfomfa, load one of these modules using a module load command like:

module load cimfomfa/22.273-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cimfomfa/22.273-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/colorize/","title":"colorize","text":"

Ruby gem for colorizing text using ANSI escape sequences. Extends the String class, or adds a ColorizedString class, with methods to set the text color, background color and text effects.

https://github.com/fazibear/colorize

"},{"location":"available_software/detail/colorize/#available-modules","title":"Available modules","text":"

The overview below shows which colorize installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using colorize, load one of these modules using a module load command like:

module load colorize/0.7.7-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 colorize/0.7.7-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/cooler/","title":"cooler","text":"

Cooler is a support library for a storage format, also called cooler, used to store genomic interaction data of any size, such as Hi-C contact matrices.

https://open2c.github.io/cooler

"},{"location":"available_software/detail/cooler/#available-modules","title":"Available modules","text":"

The overview below shows which cooler installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cooler, load one of these modules using a module load command like:

module load cooler/0.10.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cooler/0.10.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/cooler/#cooler0102-foss-2023b","title":"cooler/0.10.2-foss-2023b","text":"

This is a list of extensions included in the module:

asciitree-0.3.3, cooler-0.10.2, cytoolz-1.0.0, toolz-1.0.0

"},{"location":"available_software/detail/cpio/","title":"cpio","text":"

The cpio package contains tools for archiving.

https://savannah.gnu.org/projects/cpio/

"},{"location":"available_software/detail/cpio/#available-modules","title":"Available modules","text":"

The overview below shows which cpio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cpio, load one of these modules using a module load command like:

module load cpio/2.15-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cpio/2.15-GCCcore-12.3.0 x x x x x x x x x cpio/2.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/cppy/","title":"cppy","text":"

A small C++ header library which makes it easier to write Python extension modules. The primary feature is a PyObject smart pointer which automatically handles reference counting and provides convenience methods for performing common object operations.

https://github.com/nucleic/cppy

"},{"location":"available_software/detail/cppy/#available-modules","title":"Available modules","text":"

The overview below shows which cppy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cppy, load one of these modules using a module load command like:

module load cppy/1.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cppy/1.2.1-GCCcore-13.2.0 x x x x x x x x x cppy/1.2.1-GCCcore-12.3.0 x x x x x x x x x cppy/1.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/crb-blast/","title":"crb-blast","text":"

Conditional Reciprocal Best BLAST - high confidence ortholog assignment.

https://github.com/cboursnell/crb-blast

"},{"location":"available_software/detail/crb-blast/#available-modules","title":"Available modules","text":"

The overview below shows which crb-blast installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using crb-blast, load one of these modules using a module load command like:

module load crb-blast/0.6.9-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 crb-blast/0.6.9-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/crb-blast/#crb-blast069-gcc-1230","title":"crb-blast/0.6.9-GCC-12.3.0","text":"

This is a list of extensions included in the module:

bindeps-1.2.1, bio-1.6.0.pre.20181210, crb-blast-0.6.9, facade-1.2.1, fixwhich-1.0.2, pathname2-1.8.4, threach-0.2.0, trollop-2.9.10

"},{"location":"available_software/detail/cryptography/","title":"cryptography","text":"

cryptography is a package designed to expose cryptographic primitives and recipes to Python developers.

https://github.com/pyca/cryptography

"},{"location":"available_software/detail/cryptography/#available-modules","title":"Available modules","text":"

The overview below shows which cryptography installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using cryptography, load one of these modules using a module load command like:

module load cryptography/41.0.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 cryptography/41.0.5-GCCcore-13.2.0 x x x x x x x x x cryptography/41.0.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/dask/","title":"dask","text":"

Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love.

https://dask.org/

"},{"location":"available_software/detail/dask/#available-modules","title":"Available modules","text":"

The overview below shows which dask installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dask, load one of these modules using a module load command like:

module load dask/2023.9.2-foss-2023a\n
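As a stdlib-only sketch of the task-parallel pattern that dask generalizes (this uses concurrent.futures, not dask itself, so it runs without the module loaded):

```python
# Submit independent tasks to a pool of workers, then gather the results.
# dask generalizes this pattern to lazy task graphs, out-of-core arrays
# and dataframes, and distributed clusters.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))
print(results)  # each element computed by a worker thread
```

With dask, the equivalent computation would be expressed as a graph of delayed tasks and only evaluated on `.compute()`, letting the scheduler choose threads, processes, or cluster workers.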

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dask/2023.9.2-foss-2023a x x x x x x x x x dask/2023.7.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/dask/#dask202392-foss-2023a","title":"dask/2023.9.2-foss-2023a","text":"

This is a list of extensions included in the module:

dask-2023.9.2, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.9.2, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, zict-3.0.0

"},{"location":"available_software/detail/dask/#dask202371-foss-2022b","title":"dask/2023.7.1-foss-2022b","text":"

This is a list of extensions included in the module:

dask-2023.7.1, dask-jobqueue-0.8.2, dask-mpi-2022.4.0, distributed-2023.7.1, docrep-0.3.2, HeapDict-1.0.1, locket-1.0.0, partd-1.4.0, tblib-2.0.0, toolz-0.12.0, versioneer-0.29, zict-3.0.0

"},{"location":"available_software/detail/dill/","title":"dill","text":"

dill extends Python's pickle module for serializing and de-serializing Python objects to the majority of the built-in Python types. Serialization is the process of converting an object to a byte stream, and the inverse is converting a byte stream back to a Python object hierarchy.

https://pypi.org/project/dill/

"},{"location":"available_software/detail/dill/#available-modules","title":"Available modules","text":"

The overview below shows which dill installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dill, load one of these modules using a module load command like:

module load dill/0.3.8-GCCcore-13.2.0\n
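The round-trip that dill builds on can be sketched with the stdlib pickle module itself (dill's added value is covering objects that plain pickle rejects, such as lambdas and interactively defined functions):

```python
# Serialization round-trip: object -> byte stream -> equal object.
# Uses stdlib pickle to show the base protocol that dill extends.
import pickle

data = {"coords": [1.0, 2.5], "label": "sample"}
blob = pickle.dumps(data)       # serialize to a byte stream
restored = pickle.loads(blob)   # deserialize back to an object hierarchy
assert restored == data
```

Swapping `import pickle` for `import dill as pickle` keeps this code working while widening the set of serializable objects.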

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dill/0.3.8-GCCcore-13.2.0 x x x x x x x x x dill/0.3.7-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/dlb/","title":"dlb","text":"

DLB is a dynamic library designed to speed up HPC hybrid applications (i.e., two levels of parallelism) by improving the load balance of the outer level of parallelism (e.g., MPI) by dynamically redistributing the computational resources at the inner level of parallelism (e.g., OpenMP) at run time.

https://pm.bsc.es/dlb/

"},{"location":"available_software/detail/dlb/#available-modules","title":"Available modules","text":"

The overview below shows which dlb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using dlb, load one of these modules using a module load command like:

module load dlb/3.4-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 dlb/3.4-gompi-2023b x x x x x x x x x"},{"location":"available_software/detail/double-conversion/","title":"double-conversion","text":"

Efficient binary-decimal and decimal-binary conversion routines for IEEE doubles.

https://github.com/google/double-conversion

"},{"location":"available_software/detail/double-conversion/#available-modules","title":"Available modules","text":"

The overview below shows which double-conversion installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using double-conversion, load one of these modules using a module load command like:

module load double-conversion/3.3.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 double-conversion/3.3.0-GCCcore-13.2.0 x x x x x x x x x double-conversion/3.3.0-GCCcore-12.3.0 x x x x x x x x x double-conversion/3.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ecBuild/","title":"ecBuild","text":"

A CMake-based build system, consisting of a collection of CMake macros and functions that ease the management of software build systems.

https://ecbuild.readthedocs.io/

"},{"location":"available_software/detail/ecBuild/#available-modules","title":"Available modules","text":"

The overview below shows which ecBuild installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ecBuild, load one of these modules using a module load command like:

module load ecBuild/3.8.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecBuild/3.8.0 x x x x x x x x x"},{"location":"available_software/detail/ecCodes/","title":"ecCodes","text":"

ecCodes is a package developed by ECMWF which provides an application programming interface and a set of tools for decoding and encoding messages in the following formats: WMO FM-92 GRIB edition 1 and edition 2, WMO FM-94 BUFR edition 3 and edition 4, WMO GTS abbreviated header (only decoding).

https://software.ecmwf.int/wiki/display/ECC/ecCodes+Home

"},{"location":"available_software/detail/ecCodes/#available-modules","title":"Available modules","text":"

The overview below shows which ecCodes installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ecCodes, load one of these modules using a module load command like:

module load ecCodes/2.31.0-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ecCodes/2.31.0-gompi-2023b x x x x x x x x x ecCodes/2.31.0-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/elfutils/","title":"elfutils","text":"

The elfutils project provides libraries and tools for ELF files and DWARF data.

https://elfutils.org/

"},{"location":"available_software/detail/elfutils/#available-modules","title":"Available modules","text":"

The overview below shows which elfutils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using elfutils, load one of these modules using a module load command like:

module load elfutils/0.190-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 elfutils/0.190-GCCcore-13.2.0 x x x x x x x x x elfutils/0.189-GCCcore-12.3.0 x x x x x x x x x elfutils/0.189-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/expat/","title":"expat","text":"

Expat is an XML parser library written in C. It is a stream-oriented parser in which an application registers handlers for things the parser might find in the XML document (like start tags).

https://libexpat.github.io

"},{"location":"available_software/detail/expat/#available-modules","title":"Available modules","text":"

The overview below shows which expat installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using expat, load one of these modules using a module load command like:

module load expat/2.5.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 expat/2.5.0-GCCcore-13.2.0 x x x x x x x x x expat/2.5.0-GCCcore-12.3.0 x x x x x x x x x expat/2.4.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/expecttest/","title":"expecttest","text":"

This library implements expect tests (also known as \"golden\" tests). Expect tests are a method of writing tests where instead of hard-coding the expected output of a test, you run the test to get the output, and the test framework automatically populates the expected output. If the output of the test changes, you can rerun the test with the environment variable EXPECTTEST_ACCEPT=1 to automatically update the expected output.

https://github.com/ezyang/expecttest

"},{"location":"available_software/detail/expecttest/#available-modules","title":"Available modules","text":"

The overview below shows which expecttest installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using expecttest, load one of these modules using a module load command like:

module load expecttest/0.1.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 expecttest/0.1.5-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/f90wrap/","title":"f90wrap","text":"

f90wrap is a tool to automatically generate Python extension modules which interface to Fortran code that makes use of derived types. It builds on the capabilities of the popular f2py utility by generating a simpler Fortran 90 interface to the original Fortran code which is then suitable for wrapping with f2py, together with a higher-level Pythonic wrapper that makes the existence of an additional layer transparent to the final user.

https://github.com/jameskermode/f90wrap

"},{"location":"available_software/detail/f90wrap/#available-modules","title":"Available modules","text":"

The overview below shows which f90wrap installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using f90wrap, load one of these modules using a module load command like:

module load f90wrap/0.2.13-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 f90wrap/0.2.13-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/fastjet-contrib/","title":"fastjet-contrib","text":"

Third-party extensions of FastJet

https://fastjet.hepforge.org/contrib/

"},{"location":"available_software/detail/fastjet-contrib/#available-modules","title":"Available modules","text":"

The overview below shows which fastjet-contrib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastjet-contrib, load one of these modules using a module load command like:

module load fastjet-contrib/1.053-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet-contrib/1.053-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/fastjet/","title":"fastjet","text":"

A software package for jet finding in pp and e+e- collisions

https://fastjet.fr/

"},{"location":"available_software/detail/fastjet/#available-modules","title":"Available modules","text":"

The overview below shows which fastjet installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastjet, load one of these modules using a module load command like:

module load fastjet/3.4.2-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastjet/3.4.2-gompi-2023a x x x x x x x x x"},{"location":"available_software/detail/fastp/","title":"fastp","text":"

A tool designed to provide fast all-in-one preprocessing for FastQ files. This tool is developed in C++ with multithreading support for high performance.

https://github.com/OpenGene/fastp

"},{"location":"available_software/detail/fastp/#available-modules","title":"Available modules","text":"

The overview below shows which fastp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fastp, load one of these modules using a module load command like:

module load fastp/0.23.4-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fastp/0.23.4-GCC-12.3.0 x x x x x x x x x fastp/0.23.4-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ffnvcodec/","title":"ffnvcodec","text":"

FFmpeg NVIDIA headers. Adds support for nvenc and nvdec. Requires an NVIDIA GPU and drivers to be present (picked up dynamically).

https://git.videolan.org/?p=ffmpeg/nv-codec-headers.git

"},{"location":"available_software/detail/ffnvcodec/#available-modules","title":"Available modules","text":"

The overview below shows which ffnvcodec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ffnvcodec, load one of these modules using a module load command like:

module load ffnvcodec/12.1.14.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ffnvcodec/12.1.14.0 x x x x x x x x x ffnvcodec/12.0.16.0 x x x x x x x x x ffnvcodec/11.1.5.2 x x x x x x - x x"},{"location":"available_software/detail/flatbuffers-python/","title":"flatbuffers-python","text":"

Python Flatbuffers runtime library.

https://github.com/google/flatbuffers/

"},{"location":"available_software/detail/flatbuffers-python/#available-modules","title":"Available modules","text":"

The overview below shows which flatbuffers-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flatbuffers-python, load one of these modules using a module load command like:

module load flatbuffers-python/23.5.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers-python/23.5.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flatbuffers/","title":"flatbuffers","text":"

FlatBuffers: Memory Efficient Serialization Library

https://github.com/google/flatbuffers/

"},{"location":"available_software/detail/flatbuffers/#available-modules","title":"Available modules","text":"

The overview below shows which flatbuffers installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flatbuffers, load one of these modules using a module load command like:

module load flatbuffers/23.5.26-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flatbuffers/23.5.26-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flit/","title":"flit","text":"

A simple packaging tool for simple packages.

https://github.com/pypa/flit

"},{"location":"available_software/detail/flit/#available-modules","title":"Available modules","text":"

The overview below shows which flit installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using flit, load one of these modules using a module load command like:

module load flit/3.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 flit/3.9.0-GCCcore-13.2.0 x x x x x x x x x flit/3.9.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/flit/#flit390-gcccore-1320","title":"flit/3.9.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

certifi-2023.7.22, charset-normalizer-3.3.1, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.2, requests-2.31.0, setuptools-scm-8.0.4, tomli_w-1.0.0, typing_extensions-4.8.0, urllib3-2.0.7

"},{"location":"available_software/detail/flit/#flit390-gcccore-1230","title":"flit/3.9.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

certifi-2023.5.7, charset-normalizer-3.1.0, docutils-0.20.1, flit-3.9.0, flit_scm-1.7.0, idna-3.4, packaging-23.1, requests-2.31.0, setuptools_scm-7.1.0, tomli_w-1.0.0, typing_extensions-4.6.3, urllib3-1.26.16

"},{"location":"available_software/detail/fontconfig/","title":"fontconfig","text":"

Fontconfig is a library designed to provide system-wide font configuration, customization and application access.

https://www.freedesktop.org/wiki/Software/fontconfig/

"},{"location":"available_software/detail/fontconfig/#available-modules","title":"Available modules","text":"

The overview below shows which fontconfig installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using fontconfig, load one of these modules using a module load command like:

module load fontconfig/2.14.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 fontconfig/2.14.2-GCCcore-13.2.0 x x x x x x x x x fontconfig/2.14.2-GCCcore-12.3.0 x x x x x x x x x fontconfig/2.14.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/foss/","title":"foss","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.

https://easybuild.readthedocs.io/en/master/Common-toolchains.html#foss-toolchain

"},{"location":"available_software/detail/foss/#available-modules","title":"Available modules","text":"

The overview below shows which foss installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using foss, load one of these modules using a module load command like:

module load foss/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 foss/2023b x x x x x x x x x foss/2023a x x x x x x x x x foss/2022b x x x x x x - x x"},{"location":"available_software/detail/freeglut/","title":"freeglut","text":"

freeglut is a completely open-source alternative to the OpenGL Utility Toolkit (GLUT) library.

http://freeglut.sourceforge.net/

"},{"location":"available_software/detail/freeglut/#available-modules","title":"Available modules","text":"

The overview below shows which freeglut installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using freeglut, load one of these modules using a module load command like:

module load freeglut/3.4.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 freeglut/3.4.0-GCCcore-12.3.0 x x x x x x x x x freeglut/3.4.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/freetype/","title":"freetype","text":"

FreeType 2 is a software font engine that is designed to be small, efficient, highly customizable, and portable while capable of producing high-quality output (glyph images). It can be used in graphics libraries, display servers, font conversion tools, text image generation tools, and many other products as well.

https://www.freetype.org

"},{"location":"available_software/detail/freetype/#available-modules","title":"Available modules","text":"

The overview below shows which freetype installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using freetype, load one of these modules using a module load command like:

module load freetype/2.13.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 freetype/2.13.2-GCCcore-13.2.0 x x x x x x x x x freetype/2.13.0-GCCcore-12.3.0 x x x x x x x x x freetype/2.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/geopandas/","title":"geopandas","text":"

GeoPandas is a project to add support for geographic data to pandas objects. It currently implements GeoSeries and GeoDataFrame types, which are subclasses of pandas.Series and pandas.DataFrame respectively. GeoPandas objects can act on shapely geometry objects and perform geometric operations.

https://geopandas.org

"},{"location":"available_software/detail/geopandas/#available-modules","title":"Available modules","text":"

The overview below shows which geopandas installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using geopandas, load one of these modules using a module load command like:

module load geopandas/0.14.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 geopandas/0.14.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/geopandas/#geopandas0142-foss-2023a","title":"geopandas/0.14.2-foss-2023a","text":"

This is a list of extensions included in the module:

geopandas-0.14.2, mapclassify-2.6.1

"},{"location":"available_software/detail/gfbf/","title":"gfbf","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including FlexiBLAS (BLAS and LAPACK support) and (serial) FFTW.

(none)

"},{"location":"available_software/detail/gfbf/#available-modules","title":"Available modules","text":"

The overview below shows which gfbf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gfbf, load one of these modules using a module load command like:

module load gfbf/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gfbf/2023b x x x x x x x x x gfbf/2023a x x x x x x x x x gfbf/2022b x x x x x x - x x"},{"location":"available_software/detail/giflib/","title":"giflib","text":"

giflib is a library for reading and writing gif images. It is API and ABI compatible with libungif, which was in wide use while the LZW compression algorithm was patented.

http://giflib.sourceforge.net/

"},{"location":"available_software/detail/giflib/#available-modules","title":"Available modules","text":"

The overview below shows which giflib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using giflib, load one of these modules using a module load command like:

module load giflib/5.2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 giflib/5.2.1-GCCcore-13.2.0 x x x x x x x x x giflib/5.2.1-GCCcore-12.3.0 x x x x x x x x x giflib/5.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/git/","title":"git","text":"

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

https://git-scm.com

"},{"location":"available_software/detail/git/#available-modules","title":"Available modules","text":"

The overview below shows which git installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using git, load one of these modules using a module load command like:

module load git/2.42.0-GCCcore-13.2.0\n
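Once the module is loaded, the matching git binary is on your PATH. As a quick sanity check, you can exercise it on a throwaway repository (the repository and identity names below are made up for illustration):

```shell
# Confirm which git is active, then create a scratch repository
git --version
git init demo-repo
git -C demo-repo -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"
git -C demo-repo log --oneline   # shows the single empty commit
```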

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 git/2.42.0-GCCcore-13.2.0 x x x x x x x x x git/2.41.0-GCCcore-12.3.0-nodocs x x x x x x x x x git/2.38.1-GCCcore-12.2.0-nodocs x x x x x x - x x"},{"location":"available_software/detail/gmpy2/","title":"gmpy2","text":"

GMP/MPIR, MPFR, and MPC interface to Python 2.6+ and 3.x

https://github.com/aleaxit/gmpy

"},{"location":"available_software/detail/gmpy2/#available-modules","title":"Available modules","text":"

The overview below shows which gmpy2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gmpy2, load one of these modules using a module load command like:

module load gmpy2/2.1.5-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmpy2/2.1.5-GCC-13.2.0 x x x x x x x x x gmpy2/2.1.5-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/gmsh/","title":"gmsh","text":"

Gmsh is a 3D finite element grid generator with a built-in CAD engine and post-processor.

https://gmsh.info/

"},{"location":"available_software/detail/gmsh/#available-modules","title":"Available modules","text":"

The overview below shows which gmsh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gmsh, load one of these modules using a module load command like:

module load gmsh/4.12.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gmsh/4.12.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/gnuplot/","title":"gnuplot","text":"

Portable, interactive function plotting utility

http://gnuplot.sourceforge.net

"},{"location":"available_software/detail/gnuplot/#available-modules","title":"Available modules","text":"

The overview below shows which gnuplot installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gnuplot, load one of these modules using a module load command like:

module load gnuplot/5.4.8-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gnuplot/5.4.8-GCCcore-12.3.0 x x x x x x x x x gnuplot/5.4.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/gompi/","title":"gompi","text":"

GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.

(none)

"},{"location":"available_software/detail/gompi/#available-modules","title":"Available modules","text":"

The overview below shows which gompi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gompi, load one of these modules using a module load command like:

module load gompi/2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gompi/2023b x x x x x x x x x gompi/2023a x x x x x x x x x gompi/2022b x x x x x x - x x"},{"location":"available_software/detail/googletest/","title":"googletest","text":"

Google's framework for writing C++ tests on a variety of platforms

https://github.com/google/googletest

"},{"location":"available_software/detail/googletest/#available-modules","title":"Available modules","text":"

The overview below shows which googletest installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using googletest, load one of these modules using a module load command like:

module load googletest/1.14.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 googletest/1.14.0-GCCcore-13.2.0 x x x x x x x x x googletest/1.13.0-GCCcore-12.3.0 x x x x x x x x x googletest/1.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/graphite2/","title":"graphite2","text":"

Graphite is a \"smart font\" system developed specifically to handle the complexities of lesser-known languages of the world.

https://scripts.sil.org/cms/scripts/page.php?site_id=projects&item_id=graphite_home

"},{"location":"available_software/detail/graphite2/#available-modules","title":"Available modules","text":"

The overview below shows which graphite2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using graphite2, load one of these modules using a module load command like:

module load graphite2/1.3.14-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 graphite2/1.3.14-GCCcore-13.2.0 x x x x x x x x x graphite2/1.3.14-GCCcore-12.3.0 x x x x x x x x x graphite2/1.3.14-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/groff/","title":"groff","text":"

Groff (GNU troff) is a typesetting system that reads plain text mixed with formatting commands and produces formatted output.

https://www.gnu.org/software/groff

"},{"location":"available_software/detail/groff/#available-modules","title":"Available modules","text":"

The overview below shows which groff installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using groff, load one of these modules using a module load command like:

module load groff/1.22.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 groff/1.22.4-GCCcore-12.3.0 x x x x x x x x x groff/1.22.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/grpcio/","title":"grpcio","text":"

gRPC is a modern, open source, high-performance remote procedure call (RPC) framework that can run anywhere. gRPC enables client and server applications to communicate transparently, and simplifies the building of connected systems.

https://grpc.io/

"},{"location":"available_software/detail/grpcio/#available-modules","title":"Available modules","text":"

The overview below shows which grpcio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using grpcio, load one of these modules using a module load command like:

module load grpcio/1.57.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 grpcio/1.57.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/grpcio/#grpcio1570-gcccore-1230","title":"grpcio/1.57.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

grpcio-1.57.0

"},{"location":"available_software/detail/gtk-doc/","title":"gtk-doc","text":"

Documentation tool for public library APIs

https://gitlab.gnome.org/GNOME/gtk-doc

"},{"location":"available_software/detail/gtk-doc/#available-modules","title":"Available modules","text":"

The overview below shows which gtk-doc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gtk-doc, load one of these modules using a module load command like:

module load gtk-doc/1.34.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gtk-doc/1.34.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/gzip/","title":"gzip","text":"

gzip (GNU zip) is a popular data compression program, designed as a replacement for compress.

https://www.gnu.org/software/gzip/

"},{"location":"available_software/detail/gzip/#available-modules","title":"Available modules","text":"

The overview below shows which gzip installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using gzip, load one of these modules using a module load command like:

module load gzip/1.13-GCCcore-13.2.0\n
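After loading the module, the gzip and gunzip commands become available. A minimal round-trip check (file names here are hypothetical):

```shell
# Compress a file while keeping the original, then verify the round trip
echo "hello EESSI" > demo.txt
gzip -k demo.txt                       # -k keeps demo.txt, writes demo.txt.gz
gunzip -c demo.txt.gz > roundtrip.txt  # decompress to stdout
cmp demo.txt roundtrip.txt && echo "round-trip OK"
```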

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 gzip/1.13-GCCcore-13.2.0 x x x x x x x x x gzip/1.12-GCCcore-12.3.0 x x x x x x x x x gzip/1.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/h5netcdf/","title":"h5netcdf","text":"

A Python interface for the netCDF4 file format that reads and writes local or remote HDF5 files directly via h5py or h5pyd, without relying on the Unidata netCDF library.

https://h5netcdf.org/

"},{"location":"available_software/detail/h5netcdf/#available-modules","title":"Available modules","text":"

The overview below shows which h5netcdf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using h5netcdf, load one of these modules using a module load command like:

module load h5netcdf/1.2.0-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5netcdf/1.2.0-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/h5netcdf/#h5netcdf120-foss-2023a","title":"h5netcdf/1.2.0-foss-2023a","text":"

This is a list of extensions included in the module:

h5netcdf-1.2.0

"},{"location":"available_software/detail/h5py/","title":"h5py","text":"

HDF5 for Python (h5py) is a general-purpose Python interface to the Hierarchical Data Format library, version 5. HDF5 is a versatile, mature scientific software library designed for the fast, flexible storage of enormous amounts of data.

https://www.h5py.org/

"},{"location":"available_software/detail/h5py/#available-modules","title":"Available modules","text":"

The overview below shows which h5py installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using h5py, load one of these modules using a module load command like:

module load h5py/3.11.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 h5py/3.11.0-foss-2023b x x x x x x x x x h5py/3.9.0-foss-2023a x x x x x x x x x h5py/3.8.0-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/hatch-jupyter-builder/","title":"hatch-jupyter-builder","text":"

Hatch Jupyter Builder is a plugin for the hatchling Python build backend. It is primarily targeted at package authors who provide JavaScript as part of their Python packages. Typical use cases are JupyterLab extensions and Jupyter Widgets.

https://hatch-jupyter-builder.readthedocs.io

"},{"location":"available_software/detail/hatch-jupyter-builder/#available-modules","title":"Available modules","text":"

The overview below shows which hatch-jupyter-builder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hatch-jupyter-builder, load one of these modules using a module load command like:

module load hatch-jupyter-builder/0.9.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatch-jupyter-builder/0.9.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hatch-jupyter-builder/#hatch-jupyter-builder091-gcccore-1230","title":"hatch-jupyter-builder/0.9.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

hatch_jupyter_builder-0.9.1, hatch_nodejs_version-0.3.2

"},{"location":"available_software/detail/hatchling/","title":"hatchling","text":"

Extensible, standards-compliant build backend used by Hatch, a modern, extensible Python project manager.

https://hatch.pypa.io

"},{"location":"available_software/detail/hatchling/#available-modules","title":"Available modules","text":"

The overview below shows which hatchling installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hatchling, load one of these modules using a module load command like:

module load hatchling/1.18.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hatchling/1.18.0-GCCcore-13.2.0 x x x x x x x x x hatchling/1.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1320","title":"hatchling/1.18.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

editables-0.5, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, pathspec-0.11.2, pluggy-1.3.0, trove_classifiers-2023.10.18

"},{"location":"available_software/detail/hatchling/#hatchling1180-gcccore-1230","title":"hatchling/1.18.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

editables-0.3, hatch-requirements-txt-0.4.1, hatch_fancy_pypi_readme-23.1.0, hatch_vcs-0.3.0, hatchling-1.18.0, pathspec-0.11.1, pluggy-1.2.0, trove_classifiers-2023.5.24

"},{"location":"available_software/detail/hic-straw/","title":"hic-straw","text":"

Straw is a library which allows rapid streaming of contact data from .hic files.

https://github.com/aidenlab/straw

"},{"location":"available_software/detail/hic-straw/#available-modules","title":"Available modules","text":"

The overview below shows which hic-straw installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hic-straw, load one of these modules using a module load command like:

module load hic-straw/1.3.1-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hic-straw/1.3.1-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/hiredis/","title":"hiredis","text":"

Hiredis is a minimalistic C client library for the Redis database. It is minimalistic because it just adds minimal support for the protocol, but at the same time it uses a high-level printf-alike API in order to make it much higher level than otherwise suggested by its minimal code base and the lack of explicit bindings for every Redis command.

https://github.com/redis/hiredis

"},{"location":"available_software/detail/hiredis/#available-modules","title":"Available modules","text":"

The overview below shows which hiredis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hiredis, load one of these modules using a module load command like:

module load hiredis/1.2.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hiredis/1.2.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/hwloc/","title":"hwloc","text":"

The Portable Hardware Locality (hwloc) software package provides a portable abstraction (across OS, versions, architectures, ...) of the hierarchical topology of modern architectures, including NUMA memory nodes, sockets, shared caches, cores and simultaneous multithreading. It also gathers various system attributes such as cache and memory information as well as the locality of I/O devices such as network interfaces, InfiniBand HCAs or GPUs. It primarily aims at helping applications with gathering information about modern computing hardware so as to exploit it accordingly and efficiently.

https://www.open-mpi.org/projects/hwloc/

"},{"location":"available_software/detail/hwloc/#available-modules","title":"Available modules","text":"

The overview below shows which hwloc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hwloc, load one of these modules using a module load command like:

module load hwloc/2.9.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hwloc/2.9.2-GCCcore-13.2.0 x x x x x x x x x hwloc/2.9.1-GCCcore-12.3.0 x x x x x x x x x hwloc/2.8.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/hypothesis/","title":"hypothesis","text":"

Hypothesis is an advanced testing library for Python. It lets you write tests which are parametrized by a source of examples, and then generates simple and comprehensible examples that make your tests fail. This lets you find more bugs in your code with less work.

https://github.com/HypothesisWorks/hypothesis

"},{"location":"available_software/detail/hypothesis/#available-modules","title":"Available modules","text":"

The overview below shows which hypothesis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using hypothesis, load one of these modules using a module load command like:

module load hypothesis/6.90.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 hypothesis/6.90.0-GCCcore-13.2.0 x x x x x x x x x hypothesis/6.82.0-GCCcore-12.3.0 x x x x x x x x x hypothesis/6.68.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/ipympl/","title":"ipympl","text":"

Leveraging the Jupyter interactive widgets framework, ipympl enables the interactive features of matplotlib in the Jupyter notebook and in JupyterLab. In addition, the figure canvas element is a proper Jupyter interactive widget which can be positioned in interactive widget layouts.

https://matplotlib.org/ipympl

"},{"location":"available_software/detail/ipympl/#available-modules","title":"Available modules","text":"

The overview below shows which ipympl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ipympl, load one of these modules using a module load command like:

module load ipympl/0.9.3-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ipympl/0.9.3-gfbf-2023a x x x x x x x x x ipympl/0.9.3-foss-2023a x x x x x x - x x"},{"location":"available_software/detail/ipympl/#ipympl093-gfbf-2023a","title":"ipympl/0.9.3-gfbf-2023a","text":"

This is a list of extensions included in the module:

ipympl-0.9.3

"},{"location":"available_software/detail/ipympl/#ipympl093-foss-2023a","title":"ipympl/0.9.3-foss-2023a","text":"

This is a list of extensions included in the module:

ipympl-0.9.3

"},{"location":"available_software/detail/jbigkit/","title":"jbigkit","text":"

JBIG-KIT is a software implementation of the JBIG1 data compression standard (ITU-T T.82), which was designed for bi-level image data, such as scanned documents.

https://www.cl.cam.ac.uk/~mgk25/jbigkit/

"},{"location":"available_software/detail/jbigkit/#available-modules","title":"Available modules","text":"

The overview below shows which jbigkit installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jbigkit, load one of these modules using a module load command like:

module load jbigkit/2.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jbigkit/2.1-GCCcore-13.2.0 x x x x x x x x x jbigkit/2.1-GCCcore-12.3.0 x x x x x x x x x jbigkit/2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jedi/","title":"jedi","text":"

Jedi - an awesome autocompletion, static analysis and refactoring library for Python.

https://github.com/davidhalter/jedi

"},{"location":"available_software/detail/jedi/#available-modules","title":"Available modules","text":"

The overview below shows which jedi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jedi, load one of these modules using a module load command like:

module load jedi/0.19.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jedi/0.19.1-GCCcore-13.2.0 x x x x x x x x x jedi/0.19.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/jedi/#jedi0191-gcccore-1320","title":"jedi/0.19.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

jedi-0.19.1, parso-0.8.3

"},{"location":"available_software/detail/jedi/#jedi0190-gcccore-1230","title":"jedi/0.19.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

jedi-0.19.0, parso-0.8.3

"},{"location":"available_software/detail/jemalloc/","title":"jemalloc","text":"

jemalloc is a general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.

http://jemalloc.net

"},{"location":"available_software/detail/jemalloc/#available-modules","title":"Available modules","text":"

The overview below shows which jemalloc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jemalloc, load one of these modules using a module load command like:

module load jemalloc/5.3.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jemalloc/5.3.0-GCCcore-12.3.0 x x x x x x x x x jemalloc/5.3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jq/","title":"jq","text":"

jq is a lightweight and flexible command-line JSON processor.

https://stedolan.github.io/jq/

"},{"location":"available_software/detail/jq/#available-modules","title":"Available modules","text":"

The overview below shows which jq installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jq, load one of these modules using a module load command like:

module load jq/1.6-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jq/1.6-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/json-c/","title":"json-c","text":"

JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C, output them as JSON-formatted strings and parse JSON-formatted strings back into the C representation of JSON objects.

https://github.com/json-c/json-c

"},{"location":"available_software/detail/json-c/#available-modules","title":"Available modules","text":"

The overview below shows which json-c installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using json-c, load one of these modules using a module load command like:

module load json-c/0.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 json-c/0.17-GCCcore-13.2.0 x x x x x x x x x json-c/0.16-GCCcore-12.3.0 x x x x x x x x x json-c/0.16-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/jupyter-server/","title":"jupyter-server","text":"

The Jupyter Server provides the backend (i.e. the core services, APIs, and REST endpoints) for Jupyter web applications like Jupyter notebook, JupyterLab, and Voila.

https://jupyter.org/

"},{"location":"available_software/detail/jupyter-server/#available-modules","title":"Available modules","text":"

The overview below shows which jupyter-server installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using jupyter-server, load one of these modules using a module load command like:

module load jupyter-server/2.7.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 jupyter-server/2.7.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/jupyter-server/#jupyter-server272-gcccore-1230","title":"jupyter-server/2.7.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

anyio-3.7.1, argon2-cffi-bindings-21.2.0, argon2_cffi-23.1.0, arrow-1.2.3, bleach-6.0.0, comm-0.1.4, debugpy-1.6.7.post1, defusedxml-0.7.1, deprecation-2.1.0, fastjsonschema-2.18.0, hatch_jupyter_builder-0.8.3, hatch_nodejs_version-0.3.1, ipykernel-6.25.1, ipython_genutils-0.2.0, ipywidgets-8.1.0, jsonschema-4.18.0, jsonschema_specifications-2023.7.1, jupyter_client-8.3.0, jupyter_core-5.3.1, jupyter_events-0.7.0, jupyter_packaging-0.12.3, jupyter_server-2.7.2, jupyter_server_terminals-0.4.4, jupyterlab_pygments-0.2.2, jupyterlab_widgets-3.0.8, mistune-3.0.1, nbclient-0.8.0, nbconvert-7.7.4, nbformat-5.9.2, nest_asyncio-1.5.7, notebook_shim-0.2.3, overrides-7.4.0, pandocfilters-1.5.0, prometheus_client-0.17.1, python-json-logger-2.0.7, referencing-0.30.2, rfc3339_validator-0.1.4, rfc3986_validator-0.1.1, rpds_py-0.9.2, Send2Trash-1.8.2, sniffio-1.3.0, terminado-0.17.1, tinycss2-1.2.1, websocket-client-1.6.1, widgetsnbextension-4.0.8

"},{"location":"available_software/detail/kim-api/","title":"kim-api","text":"

Open Knowledgebase of Interatomic Models. KIM is an API and OpenKIM is a collection of interatomic models (potentials) for atomistic simulations. This is a library that can be used by simulation programs to get access to the models in the OpenKIM database. This EasyBuild installation only provides the API; the models can be installed with the package openkim-models, or the user can install them manually by running kim-api-collections-management install user MODELNAME, or kim-api-collections-management install user OpenKIM to install them all.

https://openkim.org/

"},{"location":"available_software/detail/kim-api/#available-modules","title":"Available modules","text":"

The overview below shows which kim-api installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using kim-api, load one of these modules using a module load command like:

module load kim-api/2.3.0-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 kim-api/2.3.0-GCC-13.2.0 x x x x x x x x x kim-api/2.3.0-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libGLU/","title":"libGLU","text":"

The OpenGL Utility Library (GLU) is a computer graphics library for OpenGL.

https://mesa.freedesktop.org/archive/glu/

"},{"location":"available_software/detail/libGLU/#available-modules","title":"Available modules","text":"

The overview below shows which libGLU installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libGLU, load one of these modules using a module load command like:

module load libGLU/9.0.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libGLU/9.0.3-GCCcore-13.2.0 x x x x x x x x x libGLU/9.0.3-GCCcore-12.3.0 x x x x x x x x x libGLU/9.0.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libaec/","title":"libaec","text":"

Libaec provides fast lossless compression of 1 up to 32 bit wide signed or unsigned integers (samples). The library achieves best results for low entropy data as often encountered in space imaging instrument data or numerical model output from weather or climate simulations. While floating point representations are not directly supported, they can also be efficiently coded by grouping exponents and mantissa.

https://gitlab.dkrz.de/k202009/libaec

"},{"location":"available_software/detail/libaec/#available-modules","title":"Available modules","text":"

The overview below shows which libaec installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libaec, load one of these modules using a module load command like:

module load libaec/1.0.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaec/1.0.6-GCCcore-13.2.0 x x x x x x x x x libaec/1.0.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libaio/","title":"libaio","text":"

Asynchronous input/output library that uses the kernel's native interface.

https://pagure.io/libaio

"},{"location":"available_software/detail/libaio/#available-modules","title":"Available modules","text":"

The overview below shows which libaio installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libaio, load one of these modules using a module load command like:

module load libaio/0.3.113-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libaio/0.3.113-GCCcore-12.3.0 x x x x x x x x x libaio/0.3.113-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libarchive/","title":"libarchive","text":"

Multi-format archive and compression library

https://www.libarchive.org/

"},{"location":"available_software/detail/libarchive/#available-modules","title":"Available modules","text":"

The overview below shows which libarchive installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libarchive, load one of these modules using a module load command like:

module load libarchive/3.7.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libarchive/3.7.2-GCCcore-13.2.0 x x x x x x x x x libarchive/3.6.2-GCCcore-12.3.0 x x x x x x x x x libarchive/3.6.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libcerf/","title":"libcerf","text":"

libcerf is a self-contained numeric library that provides an efficient and accurate implementation of complex error functions, along with Dawson, Faddeeva, and Voigt functions.

https://jugit.fz-juelich.de/mlz/libcerf

"},{"location":"available_software/detail/libcerf/#available-modules","title":"Available modules","text":"

The overview below shows which libcerf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libcerf, load one of these modules using a module load command like:

module load libcerf/2.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcerf/2.3-GCCcore-12.3.0 x x x x x x x x x libcerf/2.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libcint/","title":"libcint","text":"

libcint is an open source library for analytical Gaussian integrals.

https://github.com/sunqm/libcint

"},{"location":"available_software/detail/libcint/#available-modules","title":"Available modules","text":"

The overview below shows which libcint installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libcint, load one of these modules using a module load command like:

module load libcint/5.4.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libcint/5.4.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/libdeflate/","title":"libdeflate","text":"

Heavily optimized library for DEFLATE/zlib/gzip compression and decompression.

https://github.com/ebiggers/libdeflate

"},{"location":"available_software/detail/libdeflate/#available-modules","title":"Available modules","text":"

The overview below shows which libdeflate installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdeflate, load one of these modules using a module load command like:

module load libdeflate/1.19-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdeflate/1.19-GCCcore-13.2.0 x x x x x x x x x libdeflate/1.18-GCCcore-12.3.0 x x x x x x x x x libdeflate/1.15-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libdrm/","title":"libdrm","text":"

Direct Rendering Manager runtime library.

https://dri.freedesktop.org

"},{"location":"available_software/detail/libdrm/#available-modules","title":"Available modules","text":"

The overview below shows which libdrm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdrm, load one of these modules using a module load command like:

module load libdrm/2.4.117-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdrm/2.4.117-GCCcore-13.2.0 x x x x x x x x x libdrm/2.4.115-GCCcore-12.3.0 x x x x x x x x x libdrm/2.4.114-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libdwarf/","title":"libdwarf","text":"

The DWARF Debugging Information Format is of interest to programmers working on compilers and debuggers (and anyone interested in reading or writing DWARF information).

https://www.prevanders.net/dwarf.html

"},{"location":"available_software/detail/libdwarf/#available-modules","title":"Available modules","text":"

The overview below shows which libdwarf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libdwarf, load one of these modules using a module load command like:

module load libdwarf/0.9.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libdwarf/0.9.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libepoxy/","title":"libepoxy","text":"

Epoxy is a library for handling OpenGL function pointer management for you.

https://github.com/anholt/libepoxy

"},{"location":"available_software/detail/libepoxy/#available-modules","title":"Available modules","text":"

The overview below shows which libepoxy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libepoxy, load one of these modules using a module load command like:

module load libepoxy/1.5.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libepoxy/1.5.10-GCCcore-13.2.0 x x x x x x x x x libepoxy/1.5.10-GCCcore-12.3.0 x x x x x x x x x libepoxy/1.5.10-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libevent/","title":"libevent","text":"

The libevent API provides a mechanism to execute a callback function when a specific event occurs on a file descriptor or after a timeout has been reached. Furthermore, libevent also supports callbacks due to signals or regular timeouts.

https://libevent.org/

"},{"location":"available_software/detail/libevent/#available-modules","title":"Available modules","text":"

The overview below shows which libevent installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libevent, load one of these modules using a module load command like:

module load libevent/2.1.12-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libevent/2.1.12-GCCcore-13.2.0 x x x x x x x x x libevent/2.1.12-GCCcore-12.3.0 x x x x x x x x x libevent/2.1.12-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libfabric/","title":"libfabric","text":"

Libfabric is a core component of OFI. It is the library that defines and exports the user-space API of OFI, and is typically the only software that applications deal with directly. It works in conjunction with provider libraries, which are often integrated directly into libfabric.

https://ofiwg.github.io/libfabric/

"},{"location":"available_software/detail/libfabric/#available-modules","title":"Available modules","text":"

The overview below shows which libfabric installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libfabric, load one of these modules using a module load command like:

module load libfabric/1.19.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libfabric/1.19.0-GCCcore-13.2.0 x x x x x x x x x libfabric/1.18.0-GCCcore-12.3.0 x x x x x x x x x libfabric/1.16.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libffi/","title":"libffi","text":"

The libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run-time.

https://sourceware.org/libffi/

"},{"location":"available_software/detail/libffi/#available-modules","title":"Available modules","text":"

The overview below shows which libffi installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libffi, load one of these modules using a module load command like:

module load libffi/3.4.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libffi/3.4.4-GCCcore-13.2.0 x x x x x x x x x libffi/3.4.4-GCCcore-12.3.0 x x x x x x x x x libffi/3.4.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgcrypt/","title":"libgcrypt","text":"

Libgcrypt is a general purpose cryptographic library originally based on code from GnuPG

https://gnupg.org/related_software/libgcrypt/index.html

"},{"location":"available_software/detail/libgcrypt/#available-modules","title":"Available modules","text":"

The overview below shows which libgcrypt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgcrypt, load one of these modules using a module load command like:

module load libgcrypt/1.10.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgcrypt/1.10.3-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libgd/","title":"libgd","text":"

GD is an open source code library for the dynamic creation of images by programmers.

https://libgd.github.io

"},{"location":"available_software/detail/libgd/#available-modules","title":"Available modules","text":"

The overview below shows which libgd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgd, load one of these modules using a module load command like:

module load libgd/2.3.3-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgd/2.3.3-GCCcore-12.3.0 x x x x x x x x x libgd/2.3.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgeotiff/","title":"libgeotiff","text":"

Library for reading and writing coordinate system information from/to GeoTIFF files

https://directory.fsf.org/wiki/Libgeotiff

"},{"location":"available_software/detail/libgeotiff/#available-modules","title":"Available modules","text":"

The overview below shows which libgeotiff installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgeotiff, load one of these modules using a module load command like:

module load libgeotiff/1.7.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgeotiff/1.7.3-GCCcore-13.2.0 x x x x x x x x x libgeotiff/1.7.1-GCCcore-12.3.0 x x x x x x x x x libgeotiff/1.7.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgit2/","title":"libgit2","text":"

libgit2 is a portable, pure C implementation of the Git core methods provided as a re-entrant linkable library with a solid API, allowing you to write native speed custom Git applications in any language which supports C bindings.

https://libgit2.org/

"},{"location":"available_software/detail/libgit2/#available-modules","title":"Available modules","text":"

The overview below shows which libgit2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgit2, load one of these modules using a module load command like:

module load libgit2/1.7.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgit2/1.7.2-GCCcore-13.2.0 x x x x x x x x x libgit2/1.7.1-GCCcore-12.3.0 x x x x x x x x x libgit2/1.5.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libglvnd/","title":"libglvnd","text":"

libglvnd is a vendor-neutral dispatch layer for arbitrating OpenGL API calls between multiple vendors.

https://gitlab.freedesktop.org/glvnd/libglvnd

"},{"location":"available_software/detail/libglvnd/#available-modules","title":"Available modules","text":"

The overview below shows which libglvnd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libglvnd, load one of these modules using a module load command like:

module load libglvnd/1.7.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libglvnd/1.7.0-GCCcore-13.2.0 x x x x x x x x x libglvnd/1.6.0-GCCcore-12.3.0 x x x x x x x x x libglvnd/1.6.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libgpg-error/","title":"libgpg-error","text":"

Libgpg-error is a small library that defines common error values for all GnuPG components.

https://gnupg.org/related_software/libgpg-error/index.html

"},{"location":"available_software/detail/libgpg-error/#available-modules","title":"Available modules","text":"

The overview below shows which libgpg-error installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libgpg-error, load one of these modules using a module load command like:

module load libgpg-error/1.48-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libgpg-error/1.48-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libiconv/","title":"libiconv","text":"

Libiconv converts from one character encoding to another through Unicode conversion

https://www.gnu.org/software/libiconv

"},{"location":"available_software/detail/libiconv/#available-modules","title":"Available modules","text":"

The overview below shows which libiconv installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libiconv, load one of these modules using a module load command like:

module load libiconv/1.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libiconv/1.17-GCCcore-13.2.0 x x x x x x x x x libiconv/1.17-GCCcore-12.3.0 x x x x x x x x x libiconv/1.17-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libidn2/","title":"libidn2","text":"

Libidn2 implements the revised algorithm for internationalized domain names called IDNA2008/TR46.

http://www.gnu.org/software/libidn2

"},{"location":"available_software/detail/libidn2/#available-modules","title":"Available modules","text":"

The overview below shows which libidn2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libidn2, load one of these modules using a module load command like:

module load libidn2/2.3.7-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libidn2/2.3.7-GCCcore-12.3.0 x x x x x x x x x libidn2/2.3.2-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libjpeg-turbo/","title":"libjpeg-turbo","text":"

libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD to accelerate baseline JPEG compression and decompression. libjpeg is a library that implements JPEG image encoding, decoding and transcoding.

https://sourceforge.net/projects/libjpeg-turbo/

"},{"location":"available_software/detail/libjpeg-turbo/#available-modules","title":"Available modules","text":"

The overview below shows which libjpeg-turbo installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libjpeg-turbo, load one of these modules using a module load command like:

module load libjpeg-turbo/3.0.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libjpeg-turbo/3.0.1-GCCcore-13.2.0 x x x x x x x x x libjpeg-turbo/2.1.5.1-GCCcore-12.3.0 x x x x x x x x x libjpeg-turbo/2.1.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libogg/","title":"libogg","text":"

Ogg is a multimedia container format, and the native file and stream format for the Xiph.org multimedia codecs.

https://xiph.org/ogg/

"},{"location":"available_software/detail/libogg/#available-modules","title":"Available modules","text":"

The overview below shows which libogg installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libogg, load one of these modules using a module load command like:

module load libogg/1.3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libogg/1.3.5-GCCcore-13.2.0 x x x x x x x x x libogg/1.3.5-GCCcore-12.3.0 x x x x x x x x x libogg/1.3.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libopus/","title":"libopus","text":"

Opus is a totally open, royalty-free, highly versatile audio codec. Opus is unmatched for interactive speech and music transmission over the Internet, but is also intended for storage and streaming applications. It is standardized by the Internet Engineering Task Force (IETF) as RFC 6716, which incorporated technology from Skype's SILK codec and Xiph.Org's CELT codec.

https://www.opus-codec.org/

"},{"location":"available_software/detail/libopus/#available-modules","title":"Available modules","text":"

The overview below shows which libopus installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libopus, load one of these modules using a module load command like:

module load libopus/1.5.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libopus/1.5.2-GCCcore-13.2.0 x x x x x x x x x libopus/1.4-GCCcore-12.3.0 x x x x x x x x x libopus/1.3.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libpciaccess/","title":"libpciaccess","text":"

Generic PCI access library.

https://cgit.freedesktop.org/xorg/lib/libpciaccess/

"},{"location":"available_software/detail/libpciaccess/#available-modules","title":"Available modules","text":"

The overview below shows which libpciaccess installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libpciaccess, load one of these modules using a module load command like:

module load libpciaccess/0.17-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpciaccess/0.17-GCCcore-13.2.0 x x x x x x x x x libpciaccess/0.17-GCCcore-12.3.0 x x x x x x x x x libpciaccess/0.17-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libpng/","title":"libpng","text":"

libpng is the official PNG reference library

http://www.libpng.org/pub/png/libpng.html

"},{"location":"available_software/detail/libpng/#available-modules","title":"Available modules","text":"

The overview below shows which libpng installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libpng, load one of these modules using a module load command like:

module load libpng/1.6.40-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libpng/1.6.40-GCCcore-13.2.0 x x x x x x x x x libpng/1.6.39-GCCcore-12.3.0 x x x x x x x x x libpng/1.6.38-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/librosa/","title":"librosa","text":"

Audio and music processing in Python

https://librosa.org/

"},{"location":"available_software/detail/librosa/#available-modules","title":"Available modules","text":"

The overview below shows which librosa installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using librosa, load one of these modules using a module load command like:

module load librosa/0.10.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 librosa/0.10.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/librosa/#librosa0101-foss-2023a","title":"librosa/0.10.1-foss-2023a","text":"

This is a list of extensions included in the module:

audioread-3.0.1, lazy_loader-0.3, librosa-0.10.1, resampy-0.4.3, soundfile-0.12.1, soxr-0.3.7

"},{"location":"available_software/detail/libsndfile/","title":"libsndfile","text":"

Libsndfile is a C library for reading and writing files containing sampled sound (such as MS Windows WAV and the Apple/SGI AIFF format) through one standard library interface.

http://www.mega-nerd.com/libsndfile

"},{"location":"available_software/detail/libsndfile/#available-modules","title":"Available modules","text":"

The overview below shows which libsndfile installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libsndfile, load one of these modules using a module load command like:

module load libsndfile/1.2.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsndfile/1.2.2-GCCcore-13.2.0 x x x x x x x x x libsndfile/1.2.2-GCCcore-12.3.0 x x x x x x x x x libsndfile/1.2.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libsodium/","title":"libsodium","text":"

Sodium is a modern, easy-to-use software library for encryption, decryption, signatures, password hashing and more.

https://doc.libsodium.org/

"},{"location":"available_software/detail/libsodium/#available-modules","title":"Available modules","text":"

The overview below shows which libsodium installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libsodium, load one of these modules using a module load command like:

module load libsodium/1.0.19-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libsodium/1.0.19-GCCcore-13.2.0 x x x x x x x x x libsodium/1.0.18-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libspatialindex/","title":"libspatialindex","text":"

C++ implementation of R*-tree, an MVR-tree and a TPR-tree with C API

https://libspatialindex.org

"},{"location":"available_software/detail/libspatialindex/#available-modules","title":"Available modules","text":"

The overview below shows which libspatialindex installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libspatialindex, load one of these modules using a module load command like:

module load libspatialindex/1.9.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libspatialindex/1.9.3-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/libtirpc/","title":"libtirpc","text":"

Libtirpc is a port of Sun's Transport-Independent RPC library to Linux.

https://sourceforge.net/projects/libtirpc/

"},{"location":"available_software/detail/libtirpc/#available-modules","title":"Available modules","text":"

The overview below shows which libtirpc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libtirpc, load one of these modules using a module load command like:

module load libtirpc/1.3.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libtirpc/1.3.4-GCCcore-13.2.0 x x x x x x x x x libtirpc/1.3.3-GCCcore-12.3.0 x x x x x x x x x libtirpc/1.3.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libunwind/","title":"libunwind","text":"

The primary goal of libunwind is to define a portable and efficient C programming interface (API) to determine the call-chain of a program. The API additionally provides the means to manipulate the preserved (callee-saved) state of each call-frame and to resume execution at any point in the call-chain (non-local goto). The API supports both local (same-process) and remote (across-process) operation. As such, the API is useful in a number of applications.

https://www.nongnu.org/libunwind/

"},{"location":"available_software/detail/libunwind/#available-modules","title":"Available modules","text":"

The overview below shows which libunwind installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libunwind, load one of these modules using a module load command like:

module load libunwind/1.6.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libunwind/1.6.2-GCCcore-13.2.0 x x x x x x x x x libunwind/1.6.2-GCCcore-12.3.0 x x x x x x x x x libunwind/1.6.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libvorbis/","title":"libvorbis","text":"

Ogg Vorbis is a fully open, non-proprietary, patent-and-royalty-free, general-purpose compressed audio format

https://xiph.org/vorbis/

"},{"location":"available_software/detail/libvorbis/#available-modules","title":"Available modules","text":"

The overview below shows which libvorbis installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libvorbis, load one of these modules using a module load command like:

module load libvorbis/1.3.7-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libvorbis/1.3.7-GCCcore-13.2.0 x x x x x x x x x libvorbis/1.3.7-GCCcore-12.3.0 x x x x x x x x x libvorbis/1.3.7-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libvori/","title":"libvori","text":"

C++ library implementing the Voronoi integration as well as the compressed bqb file format. The present version of libvori is a very early development version, which is hard-coded to work with the CP2K program package.

https://brehm-research.de/libvori.php

"},{"location":"available_software/detail/libvori/#available-modules","title":"Available modules","text":"

The overview below shows which libvori installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libvori, load one of these modules using a module load command like:

module load libvori/220621-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libvori/220621-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libwebp/","title":"libwebp","text":"

WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.

https://developers.google.com/speed/webp/

"},{"location":"available_software/detail/libwebp/#available-modules","title":"Available modules","text":"

The overview below shows which libwebp installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libwebp, load one of these modules using a module load command like:

module load libwebp/1.3.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libwebp/1.3.2-GCCcore-13.2.0 x x x x x x x x x libwebp/1.3.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libxc/","title":"libxc","text":"

Libxc is a library of exchange-correlation functionals for density-functional theory. The aim is to provide a portable, well tested and reliable set of exchange and correlation functionals.

https://www.tddft.org/programs/libxc

"},{"location":"available_software/detail/libxc/#available-modules","title":"Available modules","text":"

The overview below shows which libxc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxc, load one of these modules using a module load command like:

module load libxc/6.2.2-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxc/6.2.2-GCC-12.3.0 x x x x x x x x x libxc/6.1.0-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxml2-python/","title":"libxml2-python","text":"

Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform). This is the Python binding.

http://xmlsoft.org/

"},{"location":"available_software/detail/libxml2-python/#available-modules","title":"Available modules","text":"

The overview below shows which libxml2-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxml2-python, load one of these modules using a module load command like:

module load libxml2-python/2.11.4-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2-python/2.11.4-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/libxml2/","title":"libxml2","text":"

Libxml2 is the XML C parser and toolchain developed for the Gnome project (but usable outside of the Gnome platform).

http://xmlsoft.org/

"},{"location":"available_software/detail/libxml2/#available-modules","title":"Available modules","text":"

The overview below shows which libxml2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxml2, load one of these modules using a module load command like:

module load libxml2/2.11.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxml2/2.11.5-GCCcore-13.2.0 x x x x x x x x x libxml2/2.11.4-GCCcore-12.3.0 x x x x x x x x x libxml2/2.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxslt/","title":"libxslt","text":"

Libxslt is the XSLT C library developed for the GNOME project (but usable outside of the Gnome platform).

http://xmlsoft.org/

"},{"location":"available_software/detail/libxslt/#available-modules","title":"Available modules","text":"

The overview below shows which libxslt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxslt, load one of these modules using a module load command like:

module load libxslt/1.1.38-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxslt/1.1.38-GCCcore-13.2.0 x x x x x x x x x libxslt/1.1.38-GCCcore-12.3.0 x x x x x x x x x libxslt/1.1.37-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/libxsmm/","title":"libxsmm","text":"

LIBXSMM is a library for small dense and small sparse matrix-matrix multiplications targeting Intel Architecture (x86).

https://github.com/hfp/libxsmm

"},{"location":"available_software/detail/libxsmm/#available-modules","title":"Available modules","text":"

The overview below shows which libxsmm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libxsmm, load one of these modules using a module load command like:

module load libxsmm/1.17-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libxsmm/1.17-GCC-12.3.0 - - - x x x x x x"},{"location":"available_software/detail/libyaml/","title":"libyaml","text":"

LibYAML is a YAML parser and emitter written in C.

https://pyyaml.org/wiki/LibYAML

"},{"location":"available_software/detail/libyaml/#available-modules","title":"Available modules","text":"

The overview below shows which libyaml installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using libyaml, load one of these modules using a module load command like:

module load libyaml/0.2.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 libyaml/0.2.5-GCCcore-13.2.0 x x x x x x x x x libyaml/0.2.5-GCCcore-12.3.0 x x x x x x x x x libyaml/0.2.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/lpsolve/","title":"lpsolve","text":"

Mixed Integer Linear Programming (MILP) solver

https://sourceforge.net/projects/lpsolve/

"},{"location":"available_software/detail/lpsolve/#available-modules","title":"Available modules","text":"

The overview below shows which lpsolve installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lpsolve, load one of these modules using a module load command like:

module load lpsolve/5.5.2.11-GCC-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lpsolve/5.5.2.11-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/lxml/","title":"lxml","text":"

The lxml XML toolkit is a Pythonic binding for the C libraries libxml2 and libxslt.

https://lxml.de/

"},{"location":"available_software/detail/lxml/#available-modules","title":"Available modules","text":"

The overview below shows which lxml installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lxml, load one of these modules using a module load command like:

module load lxml/4.9.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lxml/4.9.3-GCCcore-13.2.0 x x x x x x x x x lxml/4.9.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/lz4/","title":"lz4","text":"

LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core.

https://lz4.github.io/lz4/

"},{"location":"available_software/detail/lz4/#available-modules","title":"Available modules","text":"

The overview below shows which lz4 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using lz4, load one of these modules using a module load command like:

module load lz4/1.9.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 lz4/1.9.4-GCCcore-13.2.0 x x x x x x x x x lz4/1.9.4-GCCcore-12.3.0 x x x x x x x x x lz4/1.9.4-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/make/","title":"make","text":"

GNU version of the make utility

https://www.gnu.org/software/make/make.html

"},{"location":"available_software/detail/make/#available-modules","title":"Available modules","text":"

The overview below shows which make installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using make, load one of these modules using a module load command like:

module load make/4.4.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 make/4.4.1-GCCcore-13.2.0 x x x x x x x x x make/4.4.1-GCCcore-12.3.0 x x x x x x x x x make/4.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/mallard-ducktype/","title":"mallard-ducktype","text":"

Parser for the lightweight Ducktype syntax for Mallard

https://github.com/projectmallard/mallard-ducktype

"},{"location":"available_software/detail/mallard-ducktype/#available-modules","title":"Available modules","text":"

The overview below shows which mallard-ducktype installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mallard-ducktype, load one of these modules using a module load command like:

module load mallard-ducktype/1.0.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mallard-ducktype/1.0.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/matplotlib/","title":"matplotlib","text":"

matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in Python scripts, the Python and IPython shell, web application servers, and six graphical user interface toolkits.

https://matplotlib.org

"},{"location":"available_software/detail/matplotlib/#available-modules","title":"Available modules","text":"

The overview below shows which matplotlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using matplotlib, load one of these modules using a module load command like:

module load matplotlib/3.8.2-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 matplotlib/3.8.2-gfbf-2023b x x x x x x x x x matplotlib/3.7.2-gfbf-2023a x x x x x x x x x matplotlib/3.7.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/matplotlib/#matplotlib382-gfbf-2023b","title":"matplotlib/3.8.2-gfbf-2023b","text":"

This is a list of extensions included in the module:

contourpy-1.2.0, Cycler-0.12.1, fonttools-4.47.0, kiwisolver-1.4.5, matplotlib-3.8.2

"},{"location":"available_software/detail/matplotlib/#matplotlib372-gfbf-2023a","title":"matplotlib/3.7.2-gfbf-2023a","text":"

This is a list of extensions included in the module:

contourpy-1.1.0, Cycler-0.11.0, fonttools-4.42.0, kiwisolver-1.4.4, matplotlib-3.7.2

"},{"location":"available_software/detail/matplotlib/#matplotlib370-gfbf-2022b","title":"matplotlib/3.7.0-gfbf-2022b","text":"

This is a list of extensions included in the module:

contourpy-1.0.7, Cycler-0.11.0, fonttools-4.38.0, kiwisolver-1.4.4, matplotlib-3.7.0

"},{"location":"available_software/detail/maturin/","title":"maturin","text":"

This project is meant as a zero configuration replacement for setuptools-rust and milksnake. It supports building wheels for python 3.5+ on windows, linux, mac and freebsd, can upload them to pypi and has basic pypy and graalpy support.

https://github.com/pyo3/maturin

"},{"location":"available_software/detail/maturin/#available-modules","title":"Available modules","text":"

The overview below shows which maturin installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using maturin, load one of these modules using a module load command like:

module load maturin/1.5.0-GCCcore-13.2.0-Rust-1.76.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 maturin/1.5.0-GCCcore-13.2.0-Rust-1.76.0 x x x x x x x x x maturin/1.4.0-GCCcore-12.3.0-Rust-1.75.0 x x x x x x x x x maturin/1.1.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/meson-python/","title":"meson-python","text":"

Python build backend (PEP 517) for Meson projects

https://github.com/mesonbuild/meson-python

"},{"location":"available_software/detail/meson-python/#available-modules","title":"Available modules","text":"

The overview below shows which meson-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using meson-python, load one of these modules using a module load command like:

module load meson-python/0.15.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 meson-python/0.15.0-GCCcore-13.2.0 x x x x x x x x x meson-python/0.15.0-GCCcore-12.3.0 x x x x x x x x x meson-python/0.13.2-GCCcore-12.3.0 x x x x x x x x x meson-python/0.11.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1320","title":"meson-python/0.15.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

meson-python-0.15.0, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0150-gcccore-1230","title":"meson-python/0.15.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

meson-python-0.15.0, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0132-gcccore-1230","title":"meson-python/0.13.2-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

meson-python-0.13.2, pyproject-metadata-0.7.1

"},{"location":"available_software/detail/meson-python/#meson-python0110-gcccore-1220","title":"meson-python/0.11.0-GCCcore-12.2.0","text":"

This is a list of extensions included in the module:

meson-python-0.11.0, pyproject-metadata-0.6.1

"},{"location":"available_software/detail/mpi4py/","title":"mpi4py","text":"

MPI for Python (mpi4py) provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.

https://github.com/mpi4py/mpi4py

"},{"location":"available_software/detail/mpi4py/#available-modules","title":"Available modules","text":"

The overview below shows which mpi4py installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mpi4py, load one of these modules using a module load command like:

module load mpi4py/3.1.5-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpi4py/3.1.5-gompi-2023b x x x x x x x x x mpi4py/3.1.4-gompi-2023a x x x x x x x x x mpi4py/3.1.4-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/mpi4py/#mpi4py315-gompi-2023b","title":"mpi4py/3.1.5-gompi-2023b","text":"

This is a list of extensions included in the module:

mpi4py-3.1.5

"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2023a","title":"mpi4py/3.1.4-gompi-2023a","text":"

This is a list of extensions included in the module:

mpi4py-3.1.4

"},{"location":"available_software/detail/mpi4py/#mpi4py314-gompi-2022b","title":"mpi4py/3.1.4-gompi-2022b","text":"

This is a list of extensions included in the module:

mpi4py-3.1.4

"},{"location":"available_software/detail/mpl-ascii/","title":"mpl-ascii","text":"

A matplotlib backend that produces plots using only ASCII characters

https://github.com/chriscave/mpl_ascii

"},{"location":"available_software/detail/mpl-ascii/#available-modules","title":"Available modules","text":"

The overview below shows which mpl-ascii installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using mpl-ascii, load one of these modules using a module load command like:

module load mpl-ascii/0.10.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 mpl-ascii/0.10.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/mpl-ascii/#mpl-ascii0100-gfbf-2023a","title":"mpl-ascii/0.10.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

mpl-ascii-0.10.0

"},{"location":"available_software/detail/multiprocess/","title":"multiprocess","text":"

Better multiprocessing and multithreading in Python

https://github.com/uqfoundation/multiprocess

"},{"location":"available_software/detail/multiprocess/#available-modules","title":"Available modules","text":"

The overview below shows which multiprocess installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using multiprocess, load one of these modules using a module load command like:

module load multiprocess/0.70.16-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 multiprocess/0.70.16-gfbf-2023b x x x x x x x x x"},{"location":"available_software/detail/ncbi-vdb/","title":"ncbi-vdb","text":"

The SRA Toolkit and SDK from NCBI is a collection of tools and libraries for using data in the INSDC Sequence Read Archives.

https://github.com/ncbi/ncbi-vdb

"},{"location":"available_software/detail/ncbi-vdb/#available-modules","title":"Available modules","text":"

The overview below shows which ncbi-vdb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ncbi-vdb, load one of these modules using a module load command like:

module load ncbi-vdb/3.0.10-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ncbi-vdb/3.0.10-gompi-2023a x x x x x x x x x ncbi-vdb/3.0.5-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/ncdu/","title":"ncdu","text":"

Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don't have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.

https://dev.yorhel.nl/ncdu

"},{"location":"available_software/detail/ncdu/#available-modules","title":"Available modules","text":"

The overview below shows which ncdu installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using ncdu, load one of these modules using a module load command like:

module load ncdu/1.18-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 ncdu/1.18-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/netCDF-Fortran/","title":"netCDF-Fortran","text":"

NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

https://www.unidata.ucar.edu/software/netcdf/

"},{"location":"available_software/detail/netCDF-Fortran/#available-modules","title":"Available modules","text":"

The overview below shows which netCDF-Fortran installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netCDF-Fortran, load one of these modules using a module load command like:

module load netCDF-Fortran/4.6.1-gompi-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF-Fortran/4.6.1-gompi-2023a x x x x x x x x x netCDF-Fortran/4.6.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/netCDF/","title":"netCDF","text":"

NetCDF (network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

https://www.unidata.ucar.edu/software/netcdf/

"},{"location":"available_software/detail/netCDF/#available-modules","title":"Available modules","text":"

The overview below shows which netCDF installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netCDF, load one of these modules using a module load command like:

module load netCDF/4.9.2-gompi-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netCDF/4.9.2-gompi-2023b x x x x x x x x x netCDF/4.9.2-gompi-2023a x x x x x x x x x netCDF/4.9.0-gompi-2022b x x x x x x - x x"},{"location":"available_software/detail/netcdf4-python/","title":"netcdf4-python","text":"

Python/numpy interface to netCDF.

https://unidata.github.io/netcdf4-python/

"},{"location":"available_software/detail/netcdf4-python/#available-modules","title":"Available modules","text":"

The overview below shows which netcdf4-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using netcdf4-python, load one of these modules using a module load command like:

module load netcdf4-python/1.6.4-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 netcdf4-python/1.6.4-foss-2023a x x x x x x x x x netcdf4-python/1.6.3-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/netcdf4-python/#netcdf4-python164-foss-2023a","title":"netcdf4-python/1.6.4-foss-2023a","text":"

This is a list of extensions included in the module:

cftime-1.6.2, netcdf4-python-1.6.4

"},{"location":"available_software/detail/netcdf4-python/#netcdf4-python163-foss-2022b","title":"netcdf4-python/1.6.3-foss-2022b","text":"

This is a list of extensions included in the module:

cftime-1.6.2, netcdf4-python-1.6.3

"},{"location":"available_software/detail/nettle/","title":"nettle","text":"

Nettle is a cryptographic library that is designed to fit easily in more or less any context: In crypto toolkits for object-oriented languages (C++, Python, Pike, ...), in applications like LSH or GNUPG, or even in kernel space.

https://www.lysator.liu.se/~nisse/nettle/

"},{"location":"available_software/detail/nettle/#available-modules","title":"Available modules","text":"

The overview below shows which nettle installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nettle, load one of these modules using a module load command like:

module load nettle/3.9.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nettle/3.9.1-GCCcore-13.2.0 x x x x x x x x x nettle/3.9.1-GCCcore-12.3.0 x x x x x x x x x nettle/3.8.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/networkx/","title":"networkx","text":"

NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.

https://pypi.python.org/pypi/networkx

"},{"location":"available_software/detail/networkx/#available-modules","title":"Available modules","text":"

The overview below shows which networkx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using networkx, load one of these modules using a module load command like:

module load networkx/3.2.1-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 networkx/3.2.1-gfbf-2023b x x x x x x x x x networkx/3.1-gfbf-2023a x x x x x x x x x networkx/3.0-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/nlohmann_json/","title":"nlohmann_json","text":"

JSON for Modern C++

https://github.com/nlohmann/json

"},{"location":"available_software/detail/nlohmann_json/#available-modules","title":"Available modules","text":"

The overview below shows which nlohmann_json installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nlohmann_json, load one of these modules using a module load command like:

module load nlohmann_json/3.11.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nlohmann_json/3.11.3-GCCcore-13.2.0 x x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.3.0 x x x x x x x x x nlohmann_json/3.11.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/nodejs/","title":"nodejs","text":"

Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

https://nodejs.org

"},{"location":"available_software/detail/nodejs/#available-modules","title":"Available modules","text":"

The overview below shows which nodejs installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nodejs, load one of these modules using a module load command like:

module load nodejs/20.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nodejs/20.9.0-GCCcore-13.2.0 x x x x x x x x x nodejs/18.17.1-GCCcore-12.3.0 x x x x x x x x x nodejs/18.12.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/nsync/","title":"nsync","text":"

nsync is a C library that exports various synchronization primitives, such as mutexes

https://github.com/google/nsync

"},{"location":"available_software/detail/nsync/#available-modules","title":"Available modules","text":"

The overview below shows which nsync installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using nsync, load one of these modules using a module load command like:

module load nsync/1.26.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 nsync/1.26.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/numactl/","title":"numactl","text":"

The numactl program allows you to run your application program on specific CPUs and memory nodes. It does this by supplying a NUMA memory policy to the operating system before running your program. The libnuma library provides convenient ways for you to add NUMA memory policies into your own program.

https://github.com/numactl/numactl

"},{"location":"available_software/detail/numactl/#available-modules","title":"Available modules","text":"

The overview below shows which numactl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using numactl, load one of these modules using a module load command like:

module load numactl/2.0.16-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 numactl/2.0.16-GCCcore-13.2.0 x x x x x x x x x numactl/2.0.16-GCCcore-12.3.0 x x x x x x x x x numactl/2.0.16-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/numba/","title":"numba","text":"

Numba is an Open Source NumPy-aware optimizing compiler for Python sponsored by Continuum Analytics, Inc. It uses the remarkable LLVM compiler infrastructure to compile Python syntax to machine code.

https://numba.pydata.org/

"},{"location":"available_software/detail/numba/#available-modules","title":"Available modules","text":"

The overview below shows which numba installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using numba, load one of these modules using a module load command like:

module load numba/0.58.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 numba/0.58.1-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/numba/#numba0581-foss-2023a","title":"numba/0.58.1-foss-2023a","text":"

This is a list of extensions included in the module:

llvmlite-0.41.1, numba-0.58.1

"},{"location":"available_software/detail/occt/","title":"occt","text":"

Open CASCADE Technology (OCCT) is an object-oriented C++ class library designed for rapid production of sophisticated domain-specific CAD/CAM/CAE applications.

https://www.opencascade.com/

"},{"location":"available_software/detail/occt/#available-modules","title":"Available modules","text":"

The overview below shows which occt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using occt, load one of these modules using a module load command like:

module load occt/7.8.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 occt/7.8.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/orjson/","title":"orjson","text":"

Fast, correct Python JSON library supporting dataclasses, datetimes, and numpy.

https://github.com/ijl/orjson

"},{"location":"available_software/detail/orjson/#available-modules","title":"Available modules","text":"

The overview below shows which orjson installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using orjson, load one of these modules using a module load command like:

module load orjson/3.9.15-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 orjson/3.9.15-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/orjson/#orjson3915-gcccore-1230","title":"orjson/3.9.15-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

mypy-1.10.0, mypy_extensions-1.0.0, orjson-3.9.15, ruff-0.4.8

"},{"location":"available_software/detail/parallel/","title":"parallel","text":"

parallel: Build and execute shell commands in parallel

https://savannah.gnu.org/projects/parallel/

"},{"location":"available_software/detail/parallel/#available-modules","title":"Available modules","text":"

The overview below shows which parallel installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using parallel, load one of these modules using a module load command like:

module load parallel/20230722-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 parallel/20230722-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/patchelf/","title":"patchelf","text":"

PatchELF is a small utility to modify the dynamic linker and RPATH of ELF executables.

https://github.com/NixOS/patchelf

"},{"location":"available_software/detail/patchelf/#available-modules","title":"Available modules","text":"

The overview below shows which patchelf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using patchelf, load one of these modules using a module load command like:

module load patchelf/0.18.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 patchelf/0.18.0-GCCcore-13.2.0 x x x x x x x x x patchelf/0.18.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pixman/","title":"pixman","text":"

Pixman is a low-level software library for pixel manipulation, providing features such as image compositing and trapezoid rasterization. Important users of pixman are the cairo graphics library and the X server.

http://www.pixman.org/

"},{"location":"available_software/detail/pixman/#available-modules","title":"Available modules","text":"

The overview below shows which pixman installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pixman, load one of these modules using a module load command like:

module load pixman/0.42.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pixman/0.42.2-GCCcore-13.2.0 x x x x x x x x x pixman/0.42.2-GCCcore-12.3.0 x x x x x x x x x pixman/0.42.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pkgconf/","title":"pkgconf","text":"

pkgconf is a program which helps to configure compiler and linker flags for development libraries. It is similar to pkg-config from freedesktop.org.

https://github.com/pkgconf/pkgconf

"},{"location":"available_software/detail/pkgconf/#available-modules","title":"Available modules","text":"

The overview below shows which pkgconf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pkgconf, load one of these modules using a module load command like:

module load pkgconf/2.0.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconf/2.0.3-GCCcore-13.2.0 x x x x x x x x x pkgconf/1.9.5-GCCcore-12.3.0 x x x x x x x x x pkgconf/1.9.3-GCCcore-12.2.0 x x x x x x - x x pkgconf/1.8.0 x x x x x x x x x"},{"location":"available_software/detail/pkgconfig/","title":"pkgconfig","text":"

pkgconfig is a Python module to interface with the pkg-config command line tool

https://github.com/matze/pkgconfig

"},{"location":"available_software/detail/pkgconfig/#available-modules","title":"Available modules","text":"

The overview below shows which pkgconfig installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pkgconfig, load one of these modules using a module load command like:

module load pkgconfig/1.5.5-GCCcore-12.3.0-python\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pkgconfig/1.5.5-GCCcore-12.3.0-python x x x x x x x x x pkgconfig/1.5.5-GCCcore-12.2.0-python x x x x x x - x x"},{"location":"available_software/detail/poetry/","title":"poetry","text":"

Python packaging and dependency management made easy. Poetry helps you declare, manage and install dependencies of Python projects, ensuring you have the right stack everywhere.

https://python-poetry.org

"},{"location":"available_software/detail/poetry/#available-modules","title":"Available modules","text":"

The overview below shows which poetry installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using poetry, load one of these modules using a module load command like:

module load poetry/1.6.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 poetry/1.6.1-GCCcore-13.2.0 x x x x x x x x x poetry/1.5.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/poetry/#poetry161-gcccore-1320","title":"poetry/1.6.1-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

attrs-23.1.0, build-0.10.0, cachecontrol-0.13.1, certifi-2023.7.22, charset-normalizer-3.3.1, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.6, html5lib-1.1, idna-3.4, importlib_metadata-6.8.0, installer-0.7.0, jaraco.classes-3.3.0, jeepney-0.8.0, jsonschema-4.17.3, keyring-24.2.0, lockfile-0.12.2, more-itertools-10.1.0, msgpack-1.0.7, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.11.0, poetry-1.6.1, poetry_core-1.7.0, poetry_plugin_export-1.5.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.20.0, rapidfuzz-2.15.2, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.4, six-1.16.0, tomlkit-0.12.1, urllib3-2.0.7, webencodings-0.5.1, zipp-3.17.0

"},{"location":"available_software/detail/poetry/#poetry151-gcccore-1230","title":"poetry/1.5.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

attrs-23.1.0, build-0.10.0, CacheControl-0.12.14, certifi-2023.5.7, charset-normalizer-3.1.0, cleo-2.0.1, crashtest-0.4.1, dulwich-0.21.5, html5lib-1.1, idna-3.4, importlib_metadata-6.7.0, installer-0.7.0, jaraco.classes-3.2.3, jeepney-0.8.0, jsonschema-4.17.3, keyring-23.13.1, lockfile-0.12.2, more-itertools-9.1.0, msgpack-1.0.5, pexpect-4.8.0, pkginfo-1.9.6, platformdirs-3.8.0, poetry-1.5.1, poetry_core-1.6.1, poetry_plugin_export-1.4.0, ptyprocess-0.7.0, pyproject_hooks-1.0.0, pyrsistent-0.19.3, rapidfuzz-2.15.1, requests-2.31.0, requests-toolbelt-1.0.0, SecretStorage-3.3.3, shellingham-1.5.0, six-1.16.0, tomlkit-0.11.8, urllib3-1.26.16, webencodings-0.5.1, zipp-3.15.0

"},{"location":"available_software/detail/protobuf-python/","title":"protobuf-python","text":"

Python Protocol Buffers runtime library.

https://github.com/google/protobuf/

"},{"location":"available_software/detail/protobuf-python/#available-modules","title":"Available modules","text":"

The overview below shows which protobuf-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using protobuf-python, load one of these modules using a module load command like:

module load protobuf-python/4.24.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf-python/4.24.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/protobuf/","title":"protobuf","text":"

Protocol Buffers (a.k.a., protobuf) are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data.

https://github.com/protocolbuffers/protobuf

"},{"location":"available_software/detail/protobuf/#available-modules","title":"Available modules","text":"

The overview below shows which protobuf installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using protobuf, load one of these modules using a module load command like:

module load protobuf/24.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 protobuf/24.0-GCCcore-12.3.0 x x x x x x x x x protobuf/23.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/psycopg2/","title":"psycopg2","text":"

Psycopg is the most popular PostgreSQL adapter for the Python programming language.

https://psycopg.org/

"},{"location":"available_software/detail/psycopg2/#available-modules","title":"Available modules","text":"

The overview below shows which psycopg2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using psycopg2, load one of these modules using a module load command like:

module load psycopg2/2.9.9-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 psycopg2/2.9.9-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/psycopg2/#psycopg2299-gcccore-1230","title":"psycopg2/2.9.9-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

psycopg2-2.9.9

"},{"location":"available_software/detail/pyMBE/","title":"pyMBE","text":"

pyMBE: the Python-based Molecule Builder for ESPResSo. pyMBE provides tools to facilitate building up molecules with complex architectures in the Molecular Dynamics software ESPResSo.

"},{"location":"available_software/detail/pyMBE/#available-modules","title":"Available modules","text":"

The overview below shows which pyMBE installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyMBE, load one of these modules using a module load command like:

module load pyMBE/0.8.0-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyMBE/0.8.0-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/pyMBE/#pymbe080-foss-2023b","title":"pyMBE/0.8.0-foss-2023b","text":"

This is a list of extensions included in the module:

biopandas-0.5.1.dev0, looseversion-1.1.2, mmtf-python-1.1.3, Pint-Pandas-0.5, pyMBE-0.8.0

"},{"location":"available_software/detail/pybind11/","title":"pybind11","text":"

pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code.

https://pybind11.readthedocs.io

"},{"location":"available_software/detail/pybind11/#available-modules","title":"Available modules","text":"

The overview below shows which pybind11 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pybind11, load one of these modules using a module load command like:

module load pybind11/2.11.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pybind11/2.11.1-GCCcore-13.2.0 x x x x x x x x x pybind11/2.11.1-GCCcore-12.3.0 x x x x x x x x x pybind11/2.10.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pydantic/","title":"pydantic","text":"

Data validation and settings management using Python type hinting.

https://github.com/samuelcolvin/pydantic

"},{"location":"available_software/detail/pydantic/#available-modules","title":"Available modules","text":"

The overview below shows which pydantic installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pydantic, load one of these modules using a module load command like:

module load pydantic/2.7.4-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pydantic/2.7.4-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/pydantic/#pydantic274-gcccore-1320","title":"pydantic/2.7.4-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

annotated_types-0.6.0, pydantic-2.7.4, pydantic_core-2.18.4

"},{"location":"available_software/detail/pyfaidx/","title":"pyfaidx","text":"

pyfaidx: efficient pythonic random access to fasta subsequences

https://pypi.python.org/pypi/pyfaidx

"},{"location":"available_software/detail/pyfaidx/#available-modules","title":"Available modules","text":"

The overview below shows which pyfaidx installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyfaidx, load one of these modules using a module load command like:

module load pyfaidx/0.8.1.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyfaidx/0.8.1.1-GCCcore-13.2.0 x x x x x x x x x pyfaidx/0.8.1.1-GCCcore-12.3.0 x x x x x x x x x pyfaidx/0.7.2.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/pyfaidx/#pyfaidx0811-gcccore-1230","title":"pyfaidx/0.8.1.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

importlib_metadata-7.0.1, pyfaidx-0.8.1.1, zipp-3.17.0

"},{"location":"available_software/detail/pyproj/","title":"pyproj","text":"

Python interface to PROJ4 library for cartographic transformations

https://pyproj4.github.io/pyproj

"},{"location":"available_software/detail/pyproj/#available-modules","title":"Available modules","text":"

The overview below shows which pyproj installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pyproj, load one of these modules using a module load command like:

module load pyproj/3.6.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pyproj/3.6.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pystencils/","title":"pystencils","text":"

pystencils uses sympy to define stencil operations that can be executed on numpy arrays

https://pycodegen.pages.i10git.cs.fau.de/pystencils

"},{"location":"available_software/detail/pystencils/#available-modules","title":"Available modules","text":"

The overview below shows which pystencils installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pystencils, load one of these modules using a module load command like:

module load pystencils/1.3.4-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pystencils/1.3.4-gfbf-2023b x x x x x x x x x"},{"location":"available_software/detail/pystencils/#pystencils134-gfbf-2023b","title":"pystencils/1.3.4-gfbf-2023b","text":"

This is a list of extensions included in the module:

pystencils-1.3.4

"},{"location":"available_software/detail/pytest-flakefinder/","title":"pytest-flakefinder","text":"

Runs tests multiple times to expose flakiness.

https://github.com/dropbox/pytest-flakefinder

"},{"location":"available_software/detail/pytest-flakefinder/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-flakefinder installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-flakefinder, load one of these modules using a module load command like:

module load pytest-flakefinder/1.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-flakefinder/1.1.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pytest-rerunfailures/","title":"pytest-rerunfailures","text":"

pytest plugin to re-run tests to eliminate flaky failures.

https://github.com/pytest-dev/pytest-rerunfailures

"},{"location":"available_software/detail/pytest-rerunfailures/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-rerunfailures installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-rerunfailures, load one of these modules using a module load command like:

module load pytest-rerunfailures/12.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-rerunfailures/12.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/pytest-shard/","title":"pytest-shard","text":"

pytest plugin to support parallelism across multiple machines. Shards tests based on a hash of their test name, enabling easy parallelism across machines, suitable for a wide variety of continuous integration services. Tests are split at the finest level of granularity, individual test cases, enabling parallelism even if all of your tests are in a single file (or even a single parameterized test method).

https://github.com/AdamGleave/pytest-shard

"},{"location":"available_software/detail/pytest-shard/#available-modules","title":"Available modules","text":"

The overview below shows which pytest-shard installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using pytest-shard, load one of these modules using a module load command like:

module load pytest-shard/0.1.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 pytest-shard/0.1.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/python-casacore/","title":"python-casacore","text":"

Python-casacore is a set of Python bindings for casacore, a C++ library used in radio astronomy. Python-casacore replaces the old pyrap.

https://casacore.github.io/python-casacore/#

"},{"location":"available_software/detail/python-casacore/#available-modules","title":"Available modules","text":"

The overview below shows which python-casacore installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-casacore, load one of these modules using a module load command like:

module load python-casacore/3.5.2-foss-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-casacore/3.5.2-foss-2023b x x x x x x x x x"},{"location":"available_software/detail/python-casacore/#python-casacore352-foss-2023b","title":"python-casacore/3.5.2-foss-2023b","text":"

This is a list of extensions included in the module:

python-casacore-3.5.2, setuptools-69.1.0

"},{"location":"available_software/detail/python-isal/","title":"python-isal","text":"

Faster zlib and gzip compatible compression and decompression by providing Python bindings for the isa-l library.

https://github.com/pycompression/python-isal

"},{"location":"available_software/detail/python-isal/#available-modules","title":"Available modules","text":"

The overview below shows which python-isal installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-isal, load one of these modules using a module load command like:

module load python-isal/1.1.0-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-isal/1.1.0-GCCcore-12.3.0 x x x x x x x x x python-isal/1.1.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/python-xxhash/","title":"python-xxhash","text":"

xxhash is a Python binding for the xxHash library by Yann Collet.

https://github.com/ifduyue/python-xxhash

"},{"location":"available_software/detail/python-xxhash/#available-modules","title":"Available modules","text":"

The overview below shows which python-xxhash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using python-xxhash, load one of these modules using a module load command like:

module load python-xxhash/3.4.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 python-xxhash/3.4.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/python-xxhash/#python-xxhash341-gcccore-1230","title":"python-xxhash/3.4.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

xxhash-3.4.1

"},{"location":"available_software/detail/re2c/","title":"re2c","text":"

re2c is a free and open-source lexer generator for C and C++. Its main goal is generating fast lexers: at least as fast as their reasonably optimized hand-coded counterparts. Instead of using the traditional table-driven approach, re2c encodes the generated finite state automata directly in the form of conditional jumps and comparisons.

https://re2c.org

"},{"location":"available_software/detail/re2c/#available-modules","title":"Available modules","text":"

The overview below shows which re2c installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using re2c, load one of these modules using a module load command like:

module load re2c/3.1-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 re2c/3.1-GCCcore-13.2.0 x x x x x x x x x re2c/3.1-GCCcore-12.3.0 x x x x x x x x x re2c/3.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/rpy2/","title":"rpy2","text":"

rpy2 is an interface to R running embedded in a Python process.

https://rpy2.github.io

"},{"location":"available_software/detail/rpy2/#available-modules","title":"Available modules","text":"

The overview below shows which rpy2 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using rpy2, load one of these modules using a module load command like:

module load rpy2/3.5.15-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 rpy2/3.5.15-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/rpy2/#rpy23515-foss-2023a","title":"rpy2/3.5.15-foss-2023a","text":"

This is a list of extensions included in the module:

coverage-7.4.3, pytest-cov-4.1.0, rpy2-3.5.15, tzlocal-5.2

"},{"location":"available_software/detail/scikit-build-core/","title":"scikit-build-core","text":"

Scikit-build-core is a complete ground-up rewrite of scikit-build on top of modern packaging APIs. It provides a bridge between CMake and the Python build system, allowing you to make Python modules with CMake.

https://scikit-build.readthedocs.io/en/latest/

"},{"location":"available_software/detail/scikit-build-core/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-build-core installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-build-core, load one of these modules using a module load command like:

module load scikit-build-core/0.9.3-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build-core/0.9.3-GCCcore-13.2.0 x x x x x x x x x scikit-build-core/0.9.3-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/scikit-build-core/#scikit-build-core093-gcccore-1320","title":"scikit-build-core/0.9.3-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

scikit_build_core-0.9.3

"},{"location":"available_software/detail/scikit-build-core/#scikit-build-core093-gcccore-1230","title":"scikit-build-core/0.9.3-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

pyproject-metadata-0.8.0, scikit_build_core-0.9.3

"},{"location":"available_software/detail/scikit-build/","title":"scikit-build","text":"

Scikit-Build, or skbuild, is an improved build system generator for CPython C/C++/Fortran/Cython extensions.

https://scikit-build.readthedocs.io/en/latest

"},{"location":"available_software/detail/scikit-build/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-build installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-build, load one of these modules using a module load command like:

module load scikit-build/0.17.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-build/0.17.6-GCCcore-13.2.0 x x x x x x x x x scikit-build/0.17.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1320","title":"scikit-build/0.17.6-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

distro-1.8.0, packaging-23.1, scikit_build-0.17.6

"},{"location":"available_software/detail/scikit-build/#scikit-build0176-gcccore-1230","title":"scikit-build/0.17.6-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

distro-1.8.0, packaging-23.1, scikit_build-0.17.6

"},{"location":"available_software/detail/scikit-learn/","title":"scikit-learn","text":"

Scikit-learn integrates machine learning algorithms in the tightly-knit scientific Python world, building upon numpy, scipy, and matplotlib. As a machine-learning module, it provides versatile tools for data mining and analysis in any field of science and engineering. It strives to be simple and efficient, accessible to everybody, and reusable in various contexts.

https://scikit-learn.org/stable/index.html

"},{"location":"available_software/detail/scikit-learn/#available-modules","title":"Available modules","text":"

The overview below shows which scikit-learn installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using scikit-learn, load one of these modules using a module load command like:

module load scikit-learn/1.4.0-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 scikit-learn/1.4.0-gfbf-2023b x x x x x x x x x scikit-learn/1.3.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/scikit-learn/#scikit-learn140-gfbf-2023b","title":"scikit-learn/1.4.0-gfbf-2023b","text":"

This is a list of extensions included in the module:

scikit-learn-1.4.0, sklearn-0.0

"},{"location":"available_software/detail/scikit-learn/#scikit-learn131-gfbf-2023a","title":"scikit-learn/1.3.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

scikit-learn-1.3.1, sklearn-0.0

"},{"location":"available_software/detail/setuptools-rust/","title":"setuptools-rust","text":"

setuptools-rust is a plugin for setuptools to build Rust Python extensions implemented with PyO3 or rust-cpython.

https://github.com/PyO3/setuptools-rust

"},{"location":"available_software/detail/setuptools-rust/#available-modules","title":"Available modules","text":"

The overview below shows which setuptools-rust installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using setuptools-rust, load one of these modules using a module load command like:

module load setuptools-rust/1.8.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools-rust/1.8.0-GCCcore-13.2.0 x x x x x x x x x setuptools-rust/1.6.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust180-gcccore-1320","title":"setuptools-rust/1.8.0-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

semantic_version-2.10.0, setuptools-rust-1.8.0, typing_extensions-4.8.0

"},{"location":"available_software/detail/setuptools-rust/#setuptools-rust160-gcccore-1230","title":"setuptools-rust/1.6.0-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

semantic_version-2.10.0, setuptools-rust-1.6.0, typing_extensions-4.6.3

"},{"location":"available_software/detail/setuptools/","title":"setuptools","text":"

Easily download, build, install, upgrade, and uninstall Python packages

https://pypi.org/project/setuptools

"},{"location":"available_software/detail/setuptools/#available-modules","title":"Available modules","text":"

The overview below shows which setuptools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using setuptools, load one of these modules using a module load command like:

module load setuptools/64.0.3-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 setuptools/64.0.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/siscone/","title":"siscone","text":"

Hadron Seedless Infrared-Safe Cone jet algorithm

https://siscone.hepforge.org/

"},{"location":"available_software/detail/siscone/#available-modules","title":"Available modules","text":"

The overview below shows which siscone installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using siscone, load one of these modules using a module load command like:

module load siscone/3.0.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 siscone/3.0.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/snakemake/","title":"snakemake","text":"

The Snakemake workflow management system is a tool to create reproducible and scalable data analyses.

https://snakemake.readthedocs.io

"},{"location":"available_software/detail/snakemake/#available-modules","title":"Available modules","text":"

The overview below shows which snakemake installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using snakemake, load one of these modules using a module load command like:

module load snakemake/8.4.2-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 snakemake/8.4.2-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/snakemake/#snakemake842-foss-2023a","title":"snakemake/8.4.2-foss-2023a","text":"

This is a list of extensions included in the module:

argparse-dataclass-2.0.0, conda-inject-1.3.1, ConfigArgParse-1.7, connection-pool-0.0.3, datrie-0.8.2, dpath-2.1.6, fastjsonschema-2.19.1, humanfriendly-10.0, immutables-0.20, jupyter-core-5.7.1, nbformat-5.9.2, plac-1.4.2, reretry-0.11.8, smart-open-6.4.0, snakemake-8.4.2, snakemake-executor-plugin-cluster-generic-1.0.7, snakemake-executor-plugin-cluster-sync-0.1.3, snakemake-executor-plugin-flux-0.1.0, snakemake-executor-plugin-slurm-0.2.1, snakemake-executor-plugin-slurm-jobstep-0.1.10, snakemake-interface-common-1.15.2, snakemake-interface-executor-plugins-8.2.0, snakemake-interface-storage-plugins-3.0.0, stopit-1.1.2, throttler-1.2.2, toposort-1.10, yte-1.5.4

"},{"location":"available_software/detail/snappy/","title":"snappy","text":"

Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression.

https://github.com/google/snappy

"},{"location":"available_software/detail/snappy/#available-modules","title":"Available modules","text":"

The overview below shows which snappy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using snappy, load one of these modules using a module load command like:

module load snappy/1.1.10-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 snappy/1.1.10-GCCcore-13.2.0 x x x x x x x x x snappy/1.1.10-GCCcore-12.3.0 x x x x x x x x x snappy/1.1.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/spglib-python/","title":"spglib-python","text":"

Spglib for Python. Spglib is a library for finding and handling crystal symmetries written in C.

https://pypi.python.org/pypi/spglib

"},{"location":"available_software/detail/spglib-python/#available-modules","title":"Available modules","text":"

The overview below shows which spglib-python installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using spglib-python, load one of these modules using a module load command like:

module load spglib-python/2.0.2-gfbf-2022b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 spglib-python/2.0.2-gfbf-2022b x x x x x x - x x"},{"location":"available_software/detail/statsmodels/","title":"statsmodels","text":"

Statsmodels is a Python module that allows users to explore data, estimate statistical models, and perform statistical tests.

https://www.statsmodels.org/

"},{"location":"available_software/detail/statsmodels/#available-modules","title":"Available modules","text":"

The overview below shows which statsmodels installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using statsmodels, load one of these modules using a module load command like:

module load statsmodels/0.14.1-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 statsmodels/0.14.1-gfbf-2023b x x x x x x x x x statsmodels/0.14.1-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/statsmodels/#statsmodels0141-gfbf-2023b","title":"statsmodels/0.14.1-gfbf-2023b","text":"

This is a list of extensions included in the module:

patsy-0.5.6, statsmodels-0.14.1

"},{"location":"available_software/detail/statsmodels/#statsmodels0141-gfbf-2023a","title":"statsmodels/0.14.1-gfbf-2023a","text":"

This is a list of extensions included in the module:

patsy-0.5.6, statsmodels-0.14.1

"},{"location":"available_software/detail/sympy/","title":"sympy","text":"

SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python and does not require any external libraries.

https://sympy.org/

"},{"location":"available_software/detail/sympy/#available-modules","title":"Available modules","text":"

The overview below shows which sympy installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using sympy, load one of these modules using a module load command like:

module load sympy/1.12-gfbf-2023b\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 sympy/1.12-gfbf-2023b x x x x x x x x x sympy/1.12-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/tbb/","title":"tbb","text":"

Intel(R) Threading Building Blocks (Intel(R) TBB) lets you easily write parallel C++ programs that take full advantage of multicore performance, that are portable, composable and have future-proof scalability.

https://github.com/oneapi-src/oneTBB

"},{"location":"available_software/detail/tbb/#available-modules","title":"Available modules","text":"

The overview below shows which tbb installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tbb, load one of these modules using a module load command like:

module load tbb/2021.13.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tbb/2021.13.0-GCCcore-13.2.0 - - - x x x x x x tbb/2021.11.0-GCCcore-12.3.0 x x x x x x x x x tbb/2021.10.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/tcsh/","title":"tcsh","text":"

Tcsh is an enhanced, but completely compatible version of the Berkeley UNIX C shell (csh). It is a command language interpreter usable both as an interactive login shell and a shell script command processor. It includes a command-line editor, programmable word completion, spelling correction, a history mechanism, job control and a C-like syntax.

https://www.tcsh.org

"},{"location":"available_software/detail/tcsh/#available-modules","title":"Available modules","text":"

The overview below shows which tcsh installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tcsh, load one of these modules using a module load command like:

module load tcsh/6.24.07-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tcsh/6.24.07-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/time/","title":"time","text":"

The `time' command runs another program, then displays information about the resources used by that program, collected by the system while the program was running.

https://www.gnu.org/software/time/

"},{"location":"available_software/detail/time/#available-modules","title":"Available modules","text":"

The overview below shows which time installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using time, load one of these modules using a module load command like:

module load time/1.9-GCCcore-12.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 time/1.9-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/tmux/","title":"tmux","text":"

tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.

https://github.com/tmux/tmux/

"},{"location":"available_software/detail/tmux/#available-modules","title":"Available modules","text":"

The overview below shows which tmux installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tmux, load one of these modules using a module load command like:

module load tmux/3.3a-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tmux/3.3a-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/tornado/","title":"tornado","text":"

Tornado is a Python web framework and asynchronous networking library.

https://github.com/tornadoweb/tornado

"},{"location":"available_software/detail/tornado/#available-modules","title":"Available modules","text":"

The overview below shows which tornado installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tornado, load one of these modules using a module load command like:

module load tornado/6.3.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tornado/6.3.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/tqdm/","title":"tqdm","text":"

A fast, extensible progress bar for Python and CLI

https://github.com/tqdm/tqdm

"},{"location":"available_software/detail/tqdm/#available-modules","title":"Available modules","text":"

The overview below shows which tqdm installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using tqdm, load one of these modules using a module load command like:

module load tqdm/4.66.2-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 tqdm/4.66.2-GCCcore-13.2.0 x x x x x x x x x tqdm/4.66.1-GCCcore-12.3.0 x x x x x x x x x tqdm/4.64.1-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/typing-extensions/","title":"typing-extensions","text":"

Typing Extensions - Backported and Experimental Type Hints for Python

https://github.com/python/typing_extensions

"},{"location":"available_software/detail/typing-extensions/#available-modules","title":"Available modules","text":"

The overview below shows which typing-extensions installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using typing-extensions, load one of these modules using a module load command like:

module load typing-extensions/4.10.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 typing-extensions/4.10.0-GCCcore-13.2.0 x x x x x x x x x typing-extensions/4.9.0-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/unixODBC/","title":"unixODBC","text":"

unixODBC provides a uniform interface between applications and database drivers

https://www.unixodbc.org

"},{"location":"available_software/detail/unixODBC/#available-modules","title":"Available modules","text":"

The overview below shows which unixODBC installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using unixODBC, load one of these modules using a module load command like:

module load unixODBC/2.3.12-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 unixODBC/2.3.12-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/utf8proc/","title":"utf8proc","text":"

utf8proc is a small, clean C library that provides Unicode normalization, case-folding, and other operations for data in the UTF-8 encoding.

https://github.com/JuliaStrings/utf8proc

"},{"location":"available_software/detail/utf8proc/#available-modules","title":"Available modules","text":"

The overview below shows which utf8proc installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using utf8proc, load one of these modules using a module load command like:

module load utf8proc/2.9.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 utf8proc/2.9.0-GCCcore-13.2.0 x x x x x x x x x utf8proc/2.8.0-GCCcore-12.3.0 x x x x x x x x x utf8proc/2.8.0-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/virtualenv/","title":"virtualenv","text":"

A tool for creating isolated virtual Python environments.

https://github.com/pypa/virtualenv

"},{"location":"available_software/detail/virtualenv/#available-modules","title":"Available modules","text":"

The overview below shows which virtualenv installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using virtualenv, load one of these modules using a module load command like:

module load virtualenv/20.24.6-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 virtualenv/20.24.6-GCCcore-13.2.0 x x x x x x x x x virtualenv/20.23.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/virtualenv/#virtualenv20246-gcccore-1320","title":"virtualenv/20.24.6-GCCcore-13.2.0","text":"

This is a list of extensions included in the module:

distlib-0.3.7, filelock-3.13.0, platformdirs-3.11.0, virtualenv-20.24.6

"},{"location":"available_software/detail/virtualenv/#virtualenv20231-gcccore-1230","title":"virtualenv/20.23.1-GCCcore-12.3.0","text":"

This is a list of extensions included in the module:

distlib-0.3.6, filelock-3.12.2, platformdirs-3.8.0, virtualenv-20.23.1

"},{"location":"available_software/detail/waLBerla/","title":"waLBerla","text":"

Widely applicable Lattice-Boltzmann from Erlangen is a block-structured high-performance framework for multiphysics simulations.

https://walberla.net/index.html

"},{"location":"available_software/detail/waLBerla/#available-modules","title":"Available modules","text":"

The overview below shows which waLBerla installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using waLBerla, load one of these modules using a module load command like:

module load waLBerla/6.1-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 waLBerla/6.1-foss-2023a x x x x x x x x x waLBerla/6.1-foss-2022b x x x x x x - x x"},{"location":"available_software/detail/wget/","title":"wget","text":"

GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive command-line tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.

https://www.gnu.org/software/wget

"},{"location":"available_software/detail/wget/#available-modules","title":"Available modules","text":"

The overview below shows which wget installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wget, load one of these modules using a module load command like:

module load wget/1.24.5-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wget/1.24.5-GCCcore-12.3.0 x x x x x x x x x wget/1.21.4-GCCcore-13.2.0 x x x x x x x x x"},{"location":"available_software/detail/wradlib/","title":"wradlib","text":"

The wradlib project has been initiated in order to facilitate the use of weather radar data as well as to provide a common platform for research on new algorithms.

https://docs.wradlib.org/

"},{"location":"available_software/detail/wradlib/#available-modules","title":"Available modules","text":"

The overview below shows which wradlib installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wradlib, load one of these modules using a module load command like:

module load wradlib/2.0.3-foss-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wradlib/2.0.3-foss-2023a x x x x x x x x x"},{"location":"available_software/detail/wradlib/#wradlib203-foss-2023a","title":"wradlib/2.0.3-foss-2023a","text":"

This is a list of extensions included in the module:

cmweather-0.3.2, deprecation-2.1.0, lat_lon_parser-1.3.0, wradlib-2.0.3, xarray-datatree-0.0.13, xmltodict-0.13.0, xradar-0.5.1

"},{"location":"available_software/detail/wrapt/","title":"wrapt","text":"

The aim of the wrapt module is to provide a transparent object proxy for Python, which can be used as the basis for the construction of function wrappers and decorator functions.

https://pypi.org/project/wrapt/

"},{"location":"available_software/detail/wrapt/#available-modules","title":"Available modules","text":"

The overview below shows which wrapt installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wrapt, load one of these modules using a module load command like:

module load wrapt/1.15.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wrapt/1.15.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/wrapt/#wrapt1150-gfbf-2023a","title":"wrapt/1.15.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

wrapt-1.15.0

"},{"location":"available_software/detail/wxWidgets/","title":"wxWidgets","text":"

wxWidgets is a C++ library that lets developers create applications for Windows, Mac OS X, Linux and other platforms with a single code base. It has popular language bindings for Python, Perl, Ruby and many other languages, and unlike other cross-platform toolkits, wxWidgets gives applications a truly native look and feel because it uses the platform's native API rather than emulating the GUI.

https://www.wxwidgets.org

"},{"location":"available_software/detail/wxWidgets/#available-modules","title":"Available modules","text":"

The overview below shows which wxWidgets installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using wxWidgets, load one of these modules using a module load command like:

module load wxWidgets/3.2.6-GCC-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 wxWidgets/3.2.6-GCC-13.2.0 x x x x x x x x x wxWidgets/3.2.2.1-GCC-12.3.0 x x x x x x x x x wxWidgets/3.2.2.1-GCC-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/x264/","title":"x264","text":"

x264 is a free software library and application for encoding video streams into the H.264/MPEG-4 AVC compression format, and is released under the terms of the GNU GPL.

https://www.videolan.org/developers/x264.html

"},{"location":"available_software/detail/x264/#available-modules","title":"Available modules","text":"

The overview below shows which x264 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using x264, load one of these modules using a module load command like:

module load x264/20231019-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 x264/20231019-GCCcore-13.2.0 x x x x x x x x x x264/20230226-GCCcore-12.3.0 x x x x x x x x x x264/20230226-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/x265/","title":"x265","text":"

x265 is a free software library and application for encoding video streams into the H.265/HEVC compression format, and is released under the terms of the GNU GPL.

https://x265.org/

"},{"location":"available_software/detail/x265/#available-modules","title":"Available modules","text":"

The overview below shows which x265 installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using x265, load one of these modules using a module load command like:

module load x265/3.5-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 x265/3.5-GCCcore-13.2.0 x x x x x x x x x x265/3.5-GCCcore-12.3.0 x x x x x x x x x x265/3.5-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/xarray/","title":"xarray","text":"

xarray (formerly xray) is an open source project and Python package that aims to bring the labeled data power of pandas to the physical sciences, by providing N-dimensional variants of the core pandas data structures.

https://github.com/pydata/xarray

"},{"location":"available_software/detail/xarray/#available-modules","title":"Available modules","text":"

The overview below shows which xarray installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xarray, load one of these modules using a module load command like:

module load xarray/2023.9.0-gfbf-2023a\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xarray/2023.9.0-gfbf-2023a x x x x x x x x x"},{"location":"available_software/detail/xarray/#xarray202390-gfbf-2023a","title":"xarray/2023.9.0-gfbf-2023a","text":"

This is a list of extensions included in the module:

xarray-2023.9.0

"},{"location":"available_software/detail/xorg-macros/","title":"xorg-macros","text":"

X.org macros utilities.

https://gitlab.freedesktop.org/xorg/util/macros

"},{"location":"available_software/detail/xorg-macros/#available-modules","title":"Available modules","text":"

The overview below shows which xorg-macros installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xorg-macros, load one of these modules using a module load command like:

module load xorg-macros/1.20.0-GCCcore-13.2.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xorg-macros/1.20.0-GCCcore-13.2.0 x x x x x x x x x xorg-macros/1.20.0-GCCcore-12.3.0 x x x x x x x x x xorg-macros/1.19.3-GCCcore-12.2.0 x x x x x x - x x"},{"location":"available_software/detail/xprop/","title":"xprop","text":"

The xprop utility is for displaying window and font properties in an X server. One window or font is selected using the command line arguments or possibly in the case of a window, by clicking on the desired window. A list of properties is then given, possibly with formatting information.

https://www.x.org/wiki/

"},{"location":"available_software/detail/xprop/#available-modules","title":"Available modules","text":"

The overview below shows which xprop installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xprop, load one of these modules using a module load command like:

module load xprop/1.2.6-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xprop/1.2.6-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/xxHash/","title":"xxHash","text":"

xxHash is an extremely fast non-cryptographic hash algorithm, working at RAM speed limit.

https://cyan4973.github.io/xxHash

"},{"location":"available_software/detail/xxHash/#available-modules","title":"Available modules","text":"

The overview below shows which xxHash installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xxHash, load one of these modules using a module load command like:

module load xxHash/0.8.2-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxHash/0.8.2-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/xxd/","title":"xxd","text":"

xxd is part of the VIM package and this will only install xxd, not vim! xxd converts to/from hexdumps of binary files.

https://www.vim.org

"},{"location":"available_software/detail/xxd/#available-modules","title":"Available modules","text":"

The overview below shows which xxd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using xxd, load one of these modules using a module load command like:

module load xxd/9.1.0307-GCCcore-13.2.0\n
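Once the module is loaded, xxd can round-trip data between binary and hexdump form. A minimal sketch (the input string is illustrative):

```shell
# Produce a plain-style hexdump of some bytes ...
printf 'EESSI' | xxd -p       # prints: 4545535349
# ... and reverse a plain-style hexdump back into the original bytes
printf '4545535349' | xxd -r -p   # prints: EESSI
```

The `-p` (plain) style is handy in scripts, since it emits a bare hex string without offsets or an ASCII column.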

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 xxd/9.1.0307-GCCcore-13.2.0 x x x x x x x x x xxd/9.0.2112-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yell/","title":"yell","text":"

Yell - Your Extensible Logging Library is a comprehensive logging replacement for Ruby.

https://github.com/rudionrails/yell

"},{"location":"available_software/detail/yell/#available-modules","title":"Available modules","text":"

The overview below shows which yell installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yell, load one of these modules using a module load command like:

module load yell/2.2.2-GCC-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yell/2.2.2-GCC-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yelp-tools/","title":"yelp-tools","text":"

yelp-tools is a collection of scripts and build utilities to help create, manage, and publish documentation for Yelp and the web. Most of the heavy lifting is done by packages like yelp-xsl and itstool. This package just wraps things up in a developer-friendly way.

https://gitlab.gnome.org/GNOME/yelp-tools

"},{"location":"available_software/detail/yelp-tools/#available-modules","title":"Available modules","text":"

The overview below shows which yelp-tools installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yelp-tools, load one of these modules using a module load command like:

module load yelp-tools/42.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yelp-tools/42.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/yelp-xsl/","title":"yelp-xsl","text":"

yelp-xsl is a collection of programs and data files to help you build, maintain, and distribute documentation. It provides XSLT stylesheets that can be built upon for help viewers and publishing systems. These stylesheets output JavaScript and CSS content, and reference images provided by yelp-xsl. This package also redistributes copies of the jQuery and jQuery.Syntax JavaScript libraries.

https://gitlab.gnome.org/GNOME/yelp-xsl

"},{"location":"available_software/detail/yelp-xsl/#available-modules","title":"Available modules","text":"

The overview below shows which yelp-xsl installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using yelp-xsl, load one of these modules using a module load command like:

module load yelp-xsl/42.1-GCCcore-12.3.0\n

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 yelp-xsl/42.1-GCCcore-12.3.0 x x x x x x x x x"},{"location":"available_software/detail/zstd/","title":"zstd","text":"

Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression/speed trade-off, while being backed by a very fast decoder. It also offers a special mode for small data, called dictionary compression, and can create dictionaries from any sample set.

https://facebook.github.io/zstd

"},{"location":"available_software/detail/zstd/#available-modules","title":"Available modules","text":"

The overview below shows which zstd installations are available per target architecture in EESSI, ordered based on software version (new to old).

To start using zstd, load one of these modules using a module load command like:

module load zstd/1.5.5-GCCcore-13.2.0\n
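After loading the module, a quick way to verify the `zstd` command works is a compress/decompress round trip on a pipe (the sample text is illustrative):

```shell
# Compress stdin with zstd, decompress it again, and check the data survives
printf 'hello from EESSI' | zstd -q | zstd -dq   # prints: hello from EESSI
```

Here `-q` suppresses progress notices, and `-d` selects decompression; higher compression levels can be requested with e.g. `-19`.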

(This data was automatically generated on Tue, 10 Dec 2024 at 01:48:14 UTC)

aarch64/generic aarch64/neoverse_n1 aarch64/neoverse_v1 x86_64/generic x86_64/amd/zen2 x86_64/amd/zen3 x86_64/amd/zen4 x86_64/intel/haswell x86_64/intel/skylake_avx512 zstd/1.5.5-GCCcore-13.2.0 x x x x x x x x x zstd/1.5.5-GCCcore-12.3.0 x x x x x x x x x zstd/1.5.2-GCCcore-12.2.0 x x x x x x - x x"},{"location":"blog/","title":"Blog","text":""},{"location":"blog/2024/05/17/isc24/","title":"EESSI promo tour @ ISC'24 (May 2024, Hamburg)","text":"

This week, we had the privilege of attending the ISC'24 conference in the beautiful city of Hamburg, Germany. This was an excellent opportunity for us to showcase EESSI, and gain valuable insights and feedback from the HPC community.

"},{"location":"blog/2024/05/17/isc24/#bof-session-on-eessi","title":"BoF session on EESSI","text":"

The EESSI Birds-of-a-Feather (BoF) session on Tuesday morning, part of the official ISC'24 program, was the highlight of our activities in Hamburg.

It was well attended, with well over 100 people joining us at 9am.

During this session, we introduced the EESSI project with a short presentation, followed by a well-received live hands-on demo of installing and using EESSI by spinning up an \"empty\" Linux virtual machine instance in Amazon EC2 and getting optimized installations of popular scientific applications like GROMACS and TensorFlow running in a matter of minutes.

During the second part of the BoF session, we engaged with the audience through an interactive poll and by letting attendees ask questions.

The presentation slides, including the results of the interactive poll and questions that were raised by attendees, are available here.

"},{"location":"blog/2024/05/17/isc24/#workshops","title":"Workshops","text":"

During the last day of ISC'24, EESSI was present in no less than three different workshops.

"},{"location":"blog/2024/05/17/isc24/#risc-v-workshop","title":"RISC-V workshop","text":"

At the Fourth International workshop on RISC-V for HPC, Julián Morillo (BSC) presented our paper \"Preparing to Hit the Ground Running: Adding RISC-V support to EESSI\" (slides available here).

Julián covered the initial work that was done in the scope of the MultiXscale EuroHPC Centre-of-Excellence to add support for RISC-V to EESSI, outlined the challenges we encountered, and shared the lessons we have learned along the way.

"},{"location":"blog/2024/05/17/isc24/#ahug-workshop","title":"AHUG workshop","text":"

During the Arm HPC User Group (AHUG) workshop, Kenneth Hoste (HPC-UGent) gave a talk entitled \"Extending Arm’s Reach by Going EESSI\" (slides available here).

Next to a high-level introduction to EESSI, we briefly covered some of the challenges we encountered when testing the optimized software installations that we had built for the Arm Neoverse V1 microarchitecture, including bugs in OpenMPI and GROMACS.

Kenneth gave a live demonstration of how to get access to EESSI and start running the optimized software installations we provide through our CernVM-FS repository on a fresh AWS Graviton 3 instance in a matter of minutes.

"},{"location":"blog/2024/05/17/isc24/#pop-workshop","title":"POP workshop","text":"

In the afternoon on Thursday, Lara Peeters (HPC-UGent) presented MultiXscale during the Readiness of HPC Extreme-scale Applications workshop, which was organised by the POP EuroHPC Centre-of-Excellence (slides available here).

Lara outlined the pilot use cases on which MultiXscale focuses, and explained how EESSI helps to achieve the goals of MultiXscale in terms of Productivity, Performance, and Portability.

At the end of the workshop, a group picture was taken with both organisers and speakers, which was a great way to wrap up a busy week in Hamburg!

"},{"location":"blog/2024/05/17/isc24/#talks-and-demos-on-eessi-at-exhibit","title":"Talks and demos on EESSI at exhibit","text":"

Not only was EESSI part of the official ISC'24 program via a dedicated BoF session and various workshops: we were also prominently present on the exhibit floor.

"},{"location":"blog/2024/05/17/isc24/#microsoft-azure-booth","title":"Microsoft Azure booth","text":"

Microsoft Azure invited us to give a 1-hour introductory presentation on EESSI on both Monday and Wednesday at their booth during the ISC'24 exhibit, as well as to provide live demonstrations at the demo corner of their booth on Tuesday afternoon on how to get access to EESSI and the user experience it provides.

Exhibit attendees were welcome to pass by and ask questions, and did so throughout the full 4 hours we were present there.

Both Microsoft Azure and AWS have been graciously providing resources in their cloud infrastructure free-of-cost for developing, testing, and demonstrating EESSI for several years now.

"},{"location":"blog/2024/05/17/isc24/#eurohpc-booth","title":"EuroHPC booth","text":"

The MultiXscale EuroHPC Centre-of-Excellence we are actively involved in, and through which the development of EESSI has been co-funded since January 2023, was invited by the EuroHPC JU to present its goals and preliminary achievements at their booth.

Elisabeth Ortega (HPCNow!) did the honours of giving the last talk at the EuroHPC JU booth of the ISC'24 exhibit.

"},{"location":"blog/2024/05/17/isc24/#stickers","title":"Stickers!","text":"

Last but not least: we handed out a boatload of free stickers with the logos of both MultiXscale and EESSI itself, as well as of various open source software projects we leverage, including EasyBuild, Lmod, and CernVM-FS.

We mostly exhausted our sticker collection during ISC'24, but don't worry: we will make sure to have more available at upcoming events...

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/","title":"Portable test run of ESPResSo on EuroHPC systems via EESSI","text":"

Since 14 June 2024, ESPResSo v4.2.2 has been available in the EESSI production repository software.eessi.io, optimized for the 8 CPU targets that are fully supported by version 2023.06 of EESSI. This allows running ESPResSo effortlessly on the EuroHPC systems where EESSI is already available, like Vega and Karolina.

On 27 June 2024, an additional installation of ESPResSo v4.2.2, optimized for Arm A64FX processors, was added; this also enables running ESPResSo efficiently on Deucalion, even though EESSI is not yet available system-wide there (see below for more details).

With the portable test for ESPResSo that is available in the EESSI test suite we can easily evaluate the scalability of ESPResSo across EuroHPC systems, even if those systems have different system architectures.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#simulating-lennard-jones-fluids-using-espresso","title":"Simulating Lennard-Jones fluids using ESPResSo","text":"

Lennard-Jones fluids model interacting soft spheres with a potential that is weakly attractive at medium range and strongly repulsive at short range. Originally designed to model noble gases, this simple setup now underpins most particle-based simulations, such as ionic liquids, polymers, proteins and colloids, where strongly repulsive pairwise potentials are desirable to prevent particles from overlapping with one another. In addition, solvated systems with atomistic resolution typically have a large excess of solvent atoms compared to solute atoms, thus Lennard-Jones interactions tend to account for a large portion of the simulation time. Compared to other potentials, the Lennard-Jones interaction is inexpensive to calculate, and its limited range allows us to partition the simulation domain into arbitrarily small regions that can be distributed among many processors.
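The 12-6 Lennard-Jones potential described above is compact enough to write down directly; the snippet below is a generic textbook sketch (using reduced units with epsilon = sigma = 1 as an assumption, not the actual parameters of the ESPResSo test case):

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones potential: strongly repulsive at short range,
    weakly attractive at medium range."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The minimum of the potential, with value -epsilon, lies at r = 2**(1/6) * sigma
print(lennard_jones(2 ** (1 / 6)))
```

The limited range mentioned above comes from how quickly this potential decays: beyond a few sigma it is commonly truncated, which is what makes partitioning the simulation domain across many processors effective.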

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#portable-test-to-evaluate-performance-of-espresso","title":"Portable test to evaluate performance of ESPResSo","text":"

To evaluate the performance of ESPResSo, we have implemented a portable test for ESPResSo in the EESSI test suite; the results shown here were collected using version 0.3.2.

After installing and configuring the EESSI test suite on Vega, Karolina, and Deucalion, running the Lennard-Jones (LJ) test case with ESPResSo 4.2.2 available in EESSI can be done with:

reframe --name \"ESPRESSO_LJ.*%module_name=ESPResSo/4.2.2\"\n

This will automatically run the LJ test case with ESPResSo across all known scales in the EESSI test suite, which range from single core up to 8 full nodes.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#performance-scalability-results-on-vega-karolina-deucalion","title":"Performance + scalability results on Vega, Karolina, Deucalion","text":"

The performance results of the tests are collected by ReFrame in a detailed JSON report.
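As a sketch of what post-processing such a report could look like, the snippet below parses a heavily simplified, hypothetical report fragment (a real ReFrame report contains many more fields and a different nesting; inspect your own report to adapt the keys):

```python
import json

# Hypothetical, trimmed-down report fragment for illustration only
report = json.loads("""
{"runs": [{"testcases": [
  {"name": "ESPRESSO_LJ", "perfvalues": {"particles_per_second": 615000000.0}}
]}]}
""")

for testcase in report["runs"][0]["testcases"]:
    print(testcase["name"], "->", testcase["perfvalues"]["particles_per_second"])
```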

The parallel performance of ESPResSo, expressed in particles integrated per second, scales linearly with the number of cores. On Vega using 8 nodes (1024 MPI ranks, one per physical core), ESPResSo 4.2.2 can integrate the equations of motion of roughly 615 million particles every second. On Deucalion using 8 nodes (384 cores), we observe a performance of roughly 62 million particles integrated per second.
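A back-of-the-envelope comparison of the per-core throughput implied by these two measurements (plain arithmetic on the numbers quoted above):

```python
# particles integrated per second, divided by the number of cores used
vega_per_core = 615e6 / 1024      # 8 Vega nodes, 1024 MPI ranks
deucalion_per_core = 62e6 / 384   # 8 Deucalion nodes, 384 cores
print(round(vega_per_core), round(deucalion_per_core))  # 600586 161458
```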

Plotting the parallel efficiency of ESPResSo 4.2.2 (weak scaling, 2000 particles per MPI rank) on the three EuroHPC systems we used shows that it decreases approximately linearly with the logarithm of the number of cores.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#running-espresso-on-deucalion-via-eessi-cvmfsexec","title":"Running ESPResSo on Deucalion via EESSI + cvmfsexec","text":"

While EESSI has been available system-wide on both Vega and Karolina for some time (see here and here for more information, respectively), it was not yet available on Deucalion when these performance experiments were run.

Nevertheless, we were able to use the optimized installation of ESPResSo for A64FX that has been available in EESSI since 27 June 2024, by leveraging the cvmfsexec tool and two simple shell wrapper scripts.

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#cvmfsexec-wrapper-script","title":"cvmfsexec wrapper script","text":"

The first wrapper script, cvmfsexec_eessi.sh, can be used to run a command in a subshell in which the EESSI CernVM-FS repository (software.eessi.io) is mounted via cvmfsexec. This script can be used by regular users on Deucalion, since it does not require any special privileges beyond the Linux kernel features that cvmfsexec leverages, such as user namespaces.

Contents of ~/bin/cvmfsexec_eessi.sh:

#!/bin/bash\nif [ -d /cvmfs/software.eessi.io ]; then\n    # run command directly, EESSI CernVM-FS repository is already mounted\n    \"$@\"\nelse\n    # run command in a subshell in which the EESSI CernVM-FS repository is mounted,\n    # via cvmfsexec, which is set up in a unique temporary directory\n    orig_workdir=$(pwd)\n    mkdir -p /tmp/$USER\n    tmpdir=$(mktemp -p /tmp/$USER -d)\n    cd $tmpdir\n    git clone https://github.com/cvmfs/cvmfsexec.git > $tmpdir/git_clone.out 2>&1\n    cd cvmfsexec\n    ./makedist default > $tmpdir/cvmfsexec_makedist.out 2>&1\n    cd $orig_workdir\n    $tmpdir/cvmfsexec/cvmfsexec software.eessi.io -- \"$@\"\n    # cleanup\n    rm -rf $tmpdir\nfi\n

Do make sure that this script is executable:

chmod u+x ~/bin/cvmfsexec_eessi.sh\n

A simple way to test this script is to use it to inspect the contents of the EESSI repository:

~/bin/cvmfsexec_eessi.sh ls /cvmfs/software.eessi.io\n

or to start an interactive shell in which the EESSI repository is mounted:

~/bin/cvmfsexec_eessi.sh /bin/bash -l\n

The job scripts that were submitted by ReFrame on Deucalion leverage cvmfsexec_eessi.sh to set up the environment and get access to the ESPResSo v4.2.2 installation that is available in EESSI (see below).

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#orted-wrapper-script","title":"orted wrapper script","text":"

In order to get multi-node runs of ESPResSo working without having EESSI available system-wide, we also had to create a small wrapper script for the orted command that is used by Open MPI to start processes on remote nodes. This is necessary because mpirun launches orted, which must be run in an environment in which the EESSI repository is mounted. If not, MPI startup will fail with an error like \"error: execve(): orted: No such file or directory\".

This wrapper script must be named orted, and must be located in a path that is listed in $PATH.

We placed it in ~/bin/orted, and added export PATH=$HOME/bin:$PATH to our ~/.bashrc login script.

Contents of ~/bin/orted:

#!/bin/bash\n\n# first remove path to this orted wrapper from $PATH, to avoid infinite loop\norted_wrapper_dir=$(dirname $0)\nexport PATH=$(echo $PATH | tr ':' '\\n' | grep -v $orted_wrapper_dir | tr '\\n' ':')\n\n~/bin/cvmfsexec_eessi.sh orted \"$@\"\n
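The PATH-filtering one-liner in this wrapper can be tried in isolation (the paths below are hypothetical examples). Note that grep -v filters by substring match, and that the final tr leaves a harmless trailing colon:

```shell
demo_path="/home/user/bin:/usr/local/bin:/usr/bin"
orted_wrapper_dir="/home/user/bin"
# split on ':', drop entries matching the wrapper dir, join with ':' again
filtered=$(echo "$demo_path" | tr ':' '\n' | grep -v "$orted_wrapper_dir" | tr '\n' ':')
echo "$filtered"   # /usr/local/bin:/usr/bin:
```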

Do make sure that this orted wrapper script is also executable:

chmod u+x ~/bin/orted\n

If not, you will likely run into an error that starts with:

An ORTE daemon has unexpectedly failed after launch ...\n

"},{"location":"blog/2024/06/28/espresso-portable-test-run-eurohpc/#slurm-job-script","title":"Slurm job script","text":"

We can use the cvmfsexec_eessi.sh script in a Slurm job script on Deucalion to initialize the EESSI environment in a subshell in which the EESSI CernVM-FS repository is mounted, and subsequently load the module for ESPResSo v4.2.2 and launch the Lennard-Jones fluid simulation via mpirun:

Job script (example using 2 full 48-core nodes on A64FX partition of Deucalion):

#!/bin/bash\n#SBATCH --ntasks=96\n#SBATCH --ntasks-per-node=48\n#SBATCH --cpus-per-task=1\n#SBATCH --time=5:0:0\n#SBATCH --partition normal-arm\n#SBATCH --export=None\n#SBATCH --mem=30000M\n~/bin/cvmfsexec_eessi.sh << EOF\nexport EESSI_SOFTWARE_SUBDIR_OVERRIDE=aarch64/a64fx\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ESPResSo/4.2.2-foss-2023a\nexport SLURM_EXPORT_ENV=HOME,PATH,LD_LIBRARY_PATH,PYTHONPATH\nmpirun -np 96 python3 lj.py\nEOF\n

(the lj.py Python script is available in the EESSI test suite, see here)

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/","title":"Extrae available in EESSI","text":"

Thanks to the work carried out under the MultiXscale CoE, we are proud to announce that, as of 22 July 2024, Extrae v4.2.0 is available in the EESSI production repository software.eessi.io, optimized for the 8 CPU targets that are fully supported by version 2023.06 of EESSI. This allows using Extrae effortlessly on the EuroHPC systems where EESSI is already available, like Vega and Karolina.

It is worth noting that, as of that date, Extrae is also available in the EESSI RISC-V repository riscv.eessi.io.

Extrae is a package developed at BSC devoted to generating Paraver trace files for post-mortem analysis of application performance. It uses different interposition mechanisms to inject probes into the target application in order to gather information about its performance. It is one of the tools used in the POP3 CoE.

The work to incorporate Extrae into EESSI started in early May. It took quite some time and effort, but has resulted in a number of updates, improvements, and bug fixes for Extrae. The following sections describe the work done, the issues encountered, and the solutions adopted.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#adapting-eessi-software-layer","title":"Adapting EESSI software layer","text":"

During the first attempt to build Extrae (in this case v4.0.6) in the EESSI context, we found two issues:

  1. the configure script of Extrae was not able to find binutils in the location where it is provided by the EESSI compatibility layer, and
  2. the configure/make files of Extrae make use of the which command, which does not work in our build container.
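As a generic aside (this is not the actual fix applied here): in minimal build environments where the external which binary is unavailable, the POSIX shell built-in command -v offers equivalent functionality:

```shell
# `command -v` is a shell built-in, so it works even when the external
# `which` utility is not installed
command -v ls
```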

Both problems were solved by adding a pre_configure_hook in the eb_hooks.py file of the EESSI software layer that:

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#moving-to-version-416","title":"Moving to version 4.1.6","text":"

By the time we completed this work, v4.1.6 of Extrae was available, so we decided to switch to that version, as v4.0.6 was throwing errors in the test suite provided by Extrae through the make check command.

When first trying to build this new version, we noticed that there were still problems with binutils detection because the configure scripts of Extrae assume that the binutils libraries are under a lib directory in the provided binutils path while in the EESSI compat layer they are directly in the provided directory (i.e. without the /lib). This was solved with a patch file committed to the EasyBuild easyconfigs repository, that modifies both configure and config/macros.m4 to make binutils detection more robust. This patch was also provided to Extrae developers to incorporate into future releases.

The next step was to submit a Pull Request to the EasyBuild easyblocks repository with some modifications to the extrae.py easyblock that:

With all of this in place, we managed to build Extrae correctly, but found that many tests failed to pass, including all 21 under the MPI directory. We reported this to the Extrae developers, who answered that version 4.1.7 contained a critical bug fix related to MPI tracing, so we switched to that version before continuing our work.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#work-with-version-417","title":"Work with version 4.1.7","text":"

We tested the build of that version (of course including all the work done for previous versions) and still saw some errors in the make check phase. We focused first on the following three:

Regarding the first one, we found a bug in the Extrae test itself: mpi_comm_ranksize_f_1proc.sh was invoking trace-ldpreload.sh instead of the Fortran version trace-ldpreloadf.sh, which caused the test to fail. We submitted a Pull Request to the Extrae repository with the bugfix, which has already been merged and incorporated into new releases.

Regarding the second one, we reported it to the Extrae developers as an issue. They suggested commenting out a call in src/tracer/wrappers/pthread/pthread_wrapper.c at line 240: //Backend_Flush_pThread (pthread_self());. We confirmed that this fixed the issue, so this change has also been incorporated into the Extrae main branch for future releases.

The last failing test was related to access to hardware counters on the building/testing system. The problem was that the test assumed that Extrae (through PAPI) can access hardware counters (in this case, PAPI_TOT_INS). This may not be the case, as it is very system-dependent (it involves permissions, etc.). As a solution, we committed a patch to the Extrae repository which ensures that the test will not fail if PAPI_TOT_CYC is unavailable on the testing system. As this has not yet been incorporated into the Extrae repository, we also committed a patch file to the EasyBuild easyconfigs repository that solves the problem for this specific test, as well as for others that suffered from the same issue.

"},{"location":"blog/2024/07/26/extrae-now-available-in-EESSI/#finally-version-420","title":"Finally, version 4.2.0","text":"

Thanks to the bugfixes mentioned in the previous section being incorporated into the Extrae repository, we switched again to an updated version of Extrae (in this case v4.2.0). With that updated version and the easyconfig (and patches) and easyblock modifications, tests started to pass successfully on most of the testing platforms.

We noticed, however, that Extrae produced segmentation faults when using libunwind on Arm architectures. Our approach was to report the issue to the Extrae developers and to make this dependency architecture-specific (i.e. forcing --without-unwind when building for Arm, while keeping the dependency for the other architectures). We did this in a Pull Request to the EasyBuild easyconfigs repository that has already been merged. In the same Pull Request we added zlib as an explicit dependency in the easyconfig file for all architectures.

The last issue we encountered was similar to the previous one, but in this case it was seen on some RISC-V platforms and related to dynamic memory instrumentation. We took the same approach: we reported the issue to the Extrae developers and added --disable-instrument-dynamic-memory to the configure options, in a Pull Request that has already been merged into the EasyBuild easyconfigs repository.

With that, all tests passed on all platforms, and we were able to add Extrae to the list of software available in both the software.eessi.io and riscv.eessi.io repositories of EESSI.

"},{"location":"blog/2024/09/20/hpcwire-readers-choice-awards-2024/","title":"EESSI nominated for HPCwire Readers\u2019 Choice Awards 2024","text":"

EESSI has been nominated for the HPCwire Readers\u2019 Choice Awards 2024, in the \"Best HPC Programming Tool or Technology\" category.

You can help us win the award by joining the vote.

To vote, you should:

  1. Fill out and submit the form to register yourself as an HPCWire reader and access your ballot;
  2. Access your ballot here;
  3. Select your favorite in one or more categories;
  4. Submit your vote by filling in your name, organisation, and email address (to avoid ballot stuffing), and hitting the Done button.

Note that you are not required to vote in all categories; you can opt to vote for a single nominee in just one of the categories.

For example, you could vote for European Environment for Scientific Software Installations (EESSI) in category 13: Best HPC Programming Tool or Technology.

"},{"location":"blog/2024/10/11/ci-workflow-for-EESSI/","title":"An example CI workflow that leverages EESSI CI tools","text":"

EESSI's CI workflows are available on GitHub Actions and as a GitLab CI/CD component. Enabling this is as simple as adding EESSI's CI to your workflow of choice, giving you access to the entire EESSI software stack optimized for the relevant CPU architecture(s) in your runner's environment. If you are developing an application on top of the EESSI software stack, for example, this means you don't need to invest heavily in configuring and maintaining a CI setup: EESSI does that for you so you can focus on your code. With the EESSI CI workflows you don't have to worry about figuring out how to optimize build and runtime dependencies as these will be streamed seamlessly to your runner's environment.
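For GitHub Actions, a minimal workflow could look like the sketch below, based on the eessi/github-action-eessi action (the version pin, input name, and the explicit init step are assumptions; check the action's README for the authoritative usage):

```yaml
name: test-with-eessi
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Mount the EESSI CernVM-FS repository in the runner (assumed action version)
      - uses: eessi/github-action-eessi@v3
        with:
          eessi_stack_version: '2023.06'
      - name: Run a command with EESSI software available
        run: |
          source /cvmfs/software.eessi.io/versions/2023.06/init/bash
          module avail
```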

"},{"location":"blog/2024/10/11/ci-workflow-for-EESSI/#using-the-ci-component-in-gitlab","title":"Using the CI component in GitLab","text":"

To showcase this, let's create a simple R package that just outputs a map of the European Union and Norway, and colours the participating countries in the MultiXscale CoE.

We'll make a package eessirmaps that relies on the popular R packages ggplot2, sf, and rnaturalearth to render and save this map. Installing GIS tools for R can be somewhat cumbersome, and even trickier when it has to be done in a CI environment: sf requires the system packages libgdal-dev and libproj-dev, which would add yet another step, complicating our CI workflow. Thankfully, EESSI makes a lot of the package's dependencies available to us from the start, as well as a fully functioning version of R, and the necessary R package dependencies to boot! As far as setup goes, this results in a simple CI workflow:

include:\n  - component: $CI_SERVER_FQDN/eessi/gitlab-eessi/eessi@1.0.5\n\nbuild:\n  stage: build\n  artifacts:\n    paths:\n      - msx_map.png\n  script:\n    # Create directory for personal R library\n    - mkdir $CI_BUILDS_DIR/R\n    - export R_LIBS_USER=$CI_BUILDS_DIR/R\n    # Load the R module from EESSI\n    - module load R-bundle-CRAN/2023.12-foss-2023a\n    # Install eessirmaps, the rnaturalearth dep and create the plot\n    - R -e \"install.packages('rnaturalearthdata', repos = 'https://cran.rstudio.com/');\n      remotes::install_gitlab('neves-p/eessirmaps', upgrade = FALSE);\n      eessirmaps::multixscale_map(); ggplot2::ggsave('msx_map.png', bg = 'white')\"\n

Note how we simply include the EESSI GitLab CI component and set up a blank directory for our personal R library. Remember: because of EESSI, the environment that you develop in will be exactly the same as the one the CI runs in. Apart from the rnaturalearthdata R package, all the other dependencies are taken care of by the R-bundle-CRAN/2023.12-foss-2023a EESSI module; this holds for both the system dependencies and the R package dependencies.

Then we simply install our package into the CI environment and call the multixscale_map() function to produce the plot, which is uploaded as an artifact from the CI environment. We can then retrieve the artifact archive, unpack it, and obtain the map.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/","title":"EuroHPC User Day (22-23 Oct 2024, Amsterdam)","text":"

We had a great time at the EuroHPC User Day 2024 in Amsterdam earlier this week.

Both MultiXscale and EESSI were strongly represented, and the work we have been doing was clearly being appreciated.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#visit-to-surf-snellius-at-amsterdam-science-park","title":"Visit to SURF & Snellius at Amsterdam Science Park","text":"

Most of us arrived in the afternoon the day before the event, which gave us the chance to visit SURF on-site.

We had a short meeting there with the local team about how we could leverage Snellius, the Dutch national supercomputer, for building and testing software installations for EESSI.

We also got to visit the commercial datacenter at the Amsterdam Science Park (which will soon also host a European quantum computer!) and see Snellius up close, where we took a nice selfie.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#presentation-on-multixscale-and-eessi","title":"Presentation on MultiXscale and EESSI","text":"

After the very interesting first EuroHPC User Day in Brussels in December 2023, where MultiXscale and EESSI were mentioned as \"being well-aligned with the vision of EuroHPC JU\", we wanted to have a stronger presence at the second EuroHPC User Day in Amsterdam.

We submitted a paper entitled \"Portable test run of ESPResSo on EuroHPC systems via EESSI\" which was based on an earlier blog post we did in June 2024. Our submission was accepted, and hence the paper will be included in the upcoming proceedings of the 2nd EuroHPC User Day.

As a result, we were invited to present MultiXscale and more specifically the EESSI side of the project during one of the parallel sessions: HPC ecosystem tools. The slides of this presentation are available here.

During the Q&A after our talk various attendees asked interesting questions about specific aspects of EESSI, including:

Some attendees also provided some nice feedback on their initial experience with EESSI:

Quote by one of the attendees of the MultiXscale talk

It's very easy to install and configure CernVM-FS to provide access to EESSI based on the available documentation.

Any sysadmin can do it: it took me half a day, and that was mostly due to my own stupidity.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#mentioning-of-multixscale-and-eessi-by-other-speakers","title":"Mentioning of MultiXscale and EESSI by other speakers","text":"

It was remarkable and satisfying to see MultiXscale and EESSI mentioned several times throughout the event, often by people and organisations who are not actively involved in either project. Clearly, word is starting to spread about the work we are doing!

Valeriu Codreanu (head of High-Performance Computing and Visualization at SURF) had some nice comments to share during his opening statement of the event about their involvement in MultiXscale and EESSI, and why a well-designed shared stack of optimized software installations is really necessary.

When an attendee of one of the plenary sessions raised a question on a lack of a uniform software stack across EuroHPC systems, Lilit Axner (Programme Manager Infrastructure at EuroHPC JU) answered that a federated platform for EuroHPC systems is currently in the works, and that more news will be shared soon on this.

In the short presentation on the EuroHPC JU system Vega we got explicitly mentioned again, alongside CernVM-FS and EasyBuild which are both used in the EESSI project.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#live-demo-of-eessi-at-walk-in-networking-session","title":"Live demo of EESSI at walk-in networking session","text":"

On Wednesday, the MultiXscale project was part of the walk-in networking session Application Support, Training and Skills.

During this session we were running a live demonstration of a small Plane Poiseuille flow simulation with ESPResSo.

The software was being provided via EESSI, and we were running the simulation on various hardware platforms, including:

Attendees could participate in a contest to win a Raspberry Pi 5 starter kit by filling out a form and answering a couple of questions related to MultiXscale.

At the end of the session we did a random draw among the participants who answered the questions correctly, and Giorgos Kosta (CaSToRC - The Cyprus Institute) came out as the lucky winner!

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#eurohpc-user-forum","title":"EuroHPC User Forum","text":"

Last but not least, the EuroHPC User Forum was being presented during a plenary session.

Attendees were invited to connect with the EuroHPC User Forum representatives and each other via the dedicated Slack that has been created for it.

Lara Peeters, who is also active in MultiXscale EuroHPC Centre-of-Excellence, is part of the EuroHPC User Forum, representing Digital Humanities.

"},{"location":"blog/2024/10/25/eurohpc_user_day_2024/#eurohpc-user-day-2025-in-denmark","title":"EuroHPC User Day 2025 in Denmark","text":"

We are already looking forward to engaging with the EuroHPC user community next year in Denmark!

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/","title":"EESSI won an HPCWire Reader's Choice Award!","text":"

We are thrilled to announce that EESSI has won an HPCwire Readers' Choice Award!

EESSI received the most votes from the HPC community in the \"Best HPC Programming Tool or Technology\" category, despite fierce competition from the other projects nominated in this category.

This news was revealed at the Supercomputing 2024 (SC'24) conference in Atlanta (US).

Thank you very much if you voted for us!

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#award-ceremony","title":"Award ceremony","text":"

A modest award ceremony was held at the Do IT Now booth on the SC'24 exhibit floor, since HPCNow! (part of the Do IT Now Group) is a partner in the MultiXscale EuroHPC Centre-of-Excellence.

The award plaque was handed over by Tom Tabor, CEO of Tabor Communications, Inc., the publisher of HPCwire.

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#picture-at-eurohpc-ju-booth","title":"Picture at EuroHPC JU booth","text":"

It is important to highlight that the funding provided by the EuroHPC JU to the MultiXscale Centre-of-Excellence has been a huge catalyst in the last couple of years for EESSI, which forms the technical pillar of MultiXscale.

Anders Dam Jensen, CEO of EuroHPC JU, and Daniel Opalka, head of Research & Innovation at EuroHPC JU, were more than happy to take a commemorative picture at the EuroHPC JU booth, together with representatives of some of the MultiXscale partners (Ghent University, HPCNow!, and SURF).

"},{"location":"blog/2024/11/18/hpcwire-readers-choice-awards-2024-for-eessi/#more-info","title":"More info","text":"

For more information about EESSI, check out our website: https://eessi.io.

"},{"location":"filesystem_layer/stratum1/","title":"Setting up a Stratum 1","text":"

The EESSI project provides a number of geographically distributed public Stratum 1 servers that you can use to make EESSI available on your machine(s). It is always recommended to have a local caching layer consisting of a few Squid proxies. If you want to be even better protected against network outages and increase the bandwidth between your cluster nodes and the Stratum 1 servers, you could also consider setting up a local (private) Stratum 1 server that replicates the EESSI CVMFS repository. This guarantees that you always have a full and up-to-date copy of the entire stack available in your local network.

"},{"location":"filesystem_layer/stratum1/#requirements-for-a-stratum-1","title":"Requirements for a Stratum 1","text":"

The main requirements for a Stratum 1 server are a good network connection to the clients it is going to serve, and sufficient disk space. As the EESSI repository is constantly growing, make sure that the disk space can easily be extended if necessary. Currently, we recommend having at least 1 TB available.

In terms of cores and memory, a machine with just a few (~4) cores and 4-8 GB of memory should suffice.

Various Linux distributions are supported, but we recommend one based on RHEL 8 or 9.

Finally, make sure that ports 80 and 8000 are open to clients.

"},{"location":"filesystem_layer/stratum1/#configure-the-stratum-1","title":"Configure the Stratum 1","text":"

Stratum 1 servers have to synchronize the contents of their CVMFS repositories regularly, and usually they replicate from a CVMFS Stratum 0 server. In order to ensure the stability and security of the EESSI Stratum 0 server, it has a strict firewall, and only the EESSI-maintained public Stratum 1 servers are allowed to replicate from it. However, EESSI provides a synchronisation server that can be used for setting up private Stratum 1 replica servers, and this is available at http://aws-eu-west-s1-sync.eessi.science.

Warning

In the past we have seen a few occurrences of data transfer issues when files were being pulled in by or from a Stratum 1 server. In such cases the cvmfs_server snapshot command, used for synchronizing the Stratum 1, may break with errors like failed to download <URL to file>. Trying to manually download the mentioned file with curl will also not work, and result in errors like:

curl: (56) Recv failure: Connection reset by peer\n
In all cases this was due to an intrusion prevention system scanning the associated network, and hence scanning all files going in or out of the Stratum 1. Though it was a false positive in all cases, this breaks the synchronization procedure of your Stratum 1. If this happens to you, you can try switching to HTTPS by using https://aws-eu-west-s1-sync.eessi.science for synchronizing your Stratum 1. Even though there is no advantage for CVMFS itself in using HTTPS (it has built-in mechanisms for ensuring the integrity of the data), this will prevent the described issues, as the intrusion prevention system will not be able to inspect the encrypted data. However, not only does HTTPS introduce some overhead due to the encryption/decryption, it also makes caching in forward proxies impossible. Therefore, it is strongly discouraged to use HTTPS by default.

"},{"location":"filesystem_layer/stratum1/#manual-configuration","title":"Manual configuration","text":"

In order to set up a Stratum 1 manually, you can make use of the instructions in the Private Stratum 1 replica server section of the MultiXscale tutorial \"Best Practices for CernVM-FS in HPC\".

"},{"location":"filesystem_layer/stratum1/#configuration-using-ansible","title":"Configuration using Ansible","text":"

The recommended way for setting up an EESSI Stratum 1 is by running the Ansible playbook stratum1.yml from the filesystem-layer repository on GitHub. For the commands in this section, we are assuming that you cloned this repository, and your working directory is filesystem-layer.

Note

Installing a Stratum 1 usually requires a GEO API license key, which will be used to find the (geographically) closest Stratum 1 server for your client and proxies. However, for a private Stratum 1 this can be skipped, and you can disable the use of the GEO API in the configuration of your clients by setting CVMFS_USE_GEOAPI=no. In this case, they will just connect to your local Stratum 1 by default.
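For reference, disabling the GEO API and pointing clients at your private Stratum 1 comes down to a couple of lines in the client configuration. A hedged fragment, using the conventional CernVM-FS client configuration file and the standard @fqrn@ placeholder (adjust the path and URL for your setup):

```ini
# /etc/cvmfs/default.local on clients of a private Stratum 1 (fragment)
CVMFS_SERVER_URL="http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@"
CVMFS_USE_GEOAPI=no
```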

If you do want to set up the GEO API, you can find more information on how to (freely) obtain this key in the CVMFS documentation: https://cvmfs.readthedocs.io/en/stable/cpt-replica.html#geo-api-setup.

You can put your license key in the local configuration file inventory/local_site_specific_vars.yml with the variables cvmfs_geo_license_key and cvmfs_geo_account_id.
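As a sketch, the two variables mentioned above would look like this in inventory/local_site_specific_vars.yml (the values are placeholders):

```yaml
# inventory/local_site_specific_vars.yml (fragment; values are placeholders)
cvmfs_geo_license_key: "<your-geo-api-license-key>"
cvmfs_geo_account_id: "<your-geo-api-account-id>"
```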

Start by installing Ansible, e.g.:

sudo yum install -y ansible\n

Then install Ansible roles for EESSI:

ansible-galaxy role install -r ./requirements.yml --force\n

Make sure you have enough space in /srv on the Stratum 1, since the snapshots of the repositories will end up there by default. To alter the directory where the snapshots get stored you can manually create a symlink before running the playbook:

sudo ln -s /lots/of/space/cvmfs /srv/cvmfs\n

Also make sure that:

- you are able to log in to the server from the machine that is going to run the playbook (preferably using an SSH key);
- you can use sudo on this machine;
- you add the hostname or IP address of your server to a cvmfsstratum1servers section in the inventory/hosts file, e.g.:

[cvmfsstratum1servers]\n12.34.56.789 ansible_ssh_user=yourusername\n

Finally, install the Stratum 1 using:

# -b to run as root, optionally use -K if a sudo password is required, and optionally include your site-specific variables\nansible-playbook -b [-K] [-e @inventory/local_site_specific_vars.yml] stratum1.yml\n
Running the playbook will automatically make replicas of all the EESSI repositories defined in inventory/group_vars/all.yml. If you only want to replicate the main software repository (software.eessi.io), you can remove the other ones from the eessi_cvmfs_repositories list in this file.

"},{"location":"filesystem_layer/stratum1/#verification-of-the-stratum-1-using-curl","title":"Verification of the Stratum 1 using curl","text":"

When the playbook has finished, your Stratum 1 should be ready. In order to test your Stratum 1, even without a client installed, you can use curl:

curl --head http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io/.cvmfspublished\n
This should return something like:

HTTP/1.1 200 OK\n...\nContent-Type: application/x-cvmfs\n

Example with the EESSI Stratum 1 running in AWS:

curl --head http://aws-eu-central-s1.eessi.science/cvmfs/software.eessi.io/.cvmfspublished\n
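The two checks above (an HTTP 200 status and the application/x-cvmfs content type) can be scripted. A minimal sketch follows; check_cvmfspublished is a hypothetical helper name, and the header text is fed in directly for illustration instead of coming from a live curl call:

```shell
#!/usr/bin/env bash
# check_cvmfspublished: succeed if a 'curl --head' response looks like a
# healthy Stratum 1 reply (hypothetical helper; feed it real curl output in practice)
check_cvmfspublished() {
    local headers="$1"
    # first line must report HTTP status 200
    echo "${headers}" | head -n 1 | grep -q " 200 " || return 1
    # the .cvmfspublished file is served with the CVMFS content type
    echo "${headers}" | grep -qi "^Content-Type: application/x-cvmfs"
}

# example input, mimicking the expected response shown above
headers="HTTP/1.1 200 OK
Content-Type: application/x-cvmfs"
check_cvmfspublished "${headers}" && echo "Stratum 1 looks healthy"
# prints: Stratum 1 looks healthy
```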
"},{"location":"filesystem_layer/stratum1/#verification-of-the-stratum-1-using-a-cvmfs-client","title":"Verification of the Stratum 1 using a CVMFS client","text":"

You can, of course, also test access to your Stratum 1 from a client. This requires you to install a CernVM-FS client and add the Stratum 1 to the client configuration; this is explained in more detail on the native installation page.

Then verify that the client connects to your new Stratum 1 by running:

cvmfs_config stat -v software.eessi.io\n

Assuming that your new Stratum 1 is working properly, this should return something like:

Connection: http://<url-or-ip-to-your-stratum1>/cvmfs/software.eessi.io through proxy DIRECT (online)\n
"},{"location":"getting_access/eessi_container/","title":"EESSI container script","text":"

The eessi_container.sh script provides a very easy yet versatile means to access EESSI. It is the preferred method to start an EESSI container as it has support for many different scenarios via various options.

This page guides you through several example scenarios illustrating the use of the script.

"},{"location":"getting_access/eessi_container/#prerequisites","title":"Prerequisites","text":""},{"location":"getting_access/eessi_container/#preparation","title":"Preparation","text":"

Clone the EESSI/software-layer repository and change into the software-layer directory by running these commands:

git clone https://github.com/EESSI/software-layer.git\ncd software-layer\n
"},{"location":"getting_access/eessi_container/#quickstart","title":"Quickstart","text":"

Run the eessi_container script (from the software-layer directory) to start a shell session in the EESSI container:

./eessi_container.sh\n

Note

Startup will take a bit longer the first time you run this because the container image is downloaded and converted.

You should see output like

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell  --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nApptainer> CernVM-FS: loading Fuse module... done\nCernVM-FS: loading Fuse module... done\n\nApptainer>\n

Note

You may have to press Enter to see the prompt clearly, as some messages beginning with CernVM-FS: may be printed after the first Apptainer> prompt has appeared.

To start using EESSI, see Using EESSI/Setting up your environment.

"},{"location":"getting_access/eessi_container/#help-for-eessi_containersh","title":"Help for eessi_container.sh","text":"

The example in the Quickstart section facilitates an interactive session with read access to the EESSI software stack. It does not require any command line options, because the script eessi_container.sh uses some carefully chosen defaults. To view all options of the script and its default values, run the command

./eessi_container.sh --help\n
You should see the following output
usage: ./eessi_container.sh [OPTIONS] [[--] SCRIPT or COMMAND]\n OPTIONS:\n  -a | --access {ro,rw}  - ro (read-only), rw (read & write) [default: ro]\n  -c | --container IMG   - image file or URL defining the container to use\n                           [default: docker://ghcr.io/eessi/build-node:debian11]\n  -g | --storage DIR     - directory space on host machine (used for\n                           temporary data) [default: 1. TMPDIR, 2. /tmp]\n  -h | --help            - display this usage information [default: false]\n  -i | --host-injections - directory to link to for host_injections \n                           [default: /..storage../opt-eessi]\n  -l | --list-repos      - list available repository identifiers [default: false]\n  -m | --mode MODE       - with MODE==shell (launch interactive shell) or\n                           MODE==run (run a script or command) [default: shell]\n  -n | --nvidia MODE     - configure the container to work with NVIDIA GPUs,\n                           MODE==install for a CUDA installation, MODE==run to\n                           attach a GPU, MODE==all for both [default: false]\n  -r | --repository CFG  - configuration file or identifier defining the\n                           repository to use [default: EESSI via\n                           container configuration]\n  -u | --resume DIR/TGZ  - resume a previous run from a directory or tarball,\n                           where DIR points to a previously used tmp directory\n                           (check for output 'Using DIR as tmp ...' 
of a previous\n                           run) and TGZ is the path to a tarball which is\n                           unpacked the tmp dir stored on the local storage space\n                           (see option --storage above) [default: not set]\n  -s | --save DIR/TGZ    - save contents of tmp directory to a tarball in\n                           directory DIR or provided with the fixed full path TGZ\n                           when a directory is provided, the format of the\n                           tarball's name will be {REPO_ID}-{TIMESTAMP}.tgz\n                           [default: not set]\n  -v | --verbose         - display more information [default: false]\n  -x | --http-proxy URL  - provides URL for the env variable http_proxy\n                           [default: not set]; uses env var $http_proxy if set\n  -y | --https-proxy URL - provides URL for the env variable https_proxy\n                           [default: not set]; uses env var $https_proxy if set\n\n If value for --mode is 'run', the SCRIPT/COMMAND provided is executed. If\n arguments to the script/command start with '-' or '--', use the flag terminator\n '--' to let eessi_container.sh stop parsing arguments.\n

So, the defaults are equal to running the command

./eessi_container.sh --access ro --container docker://ghcr.io/eessi/build-node:debian11 --mode shell --repository EESSI\n
and it would either create a temporary directory under ${TMPDIR} (if defined), or /tmp (if ${TMPDIR} is not defined).
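The temporary-directory selection can be sketched in a few lines of shell. This is a simplified illustration, not the script's actual code; the real script does more bookkeeping around the eessi.RANDOM directory it creates:

```shell
#!/usr/bin/env bash
# Simplified sketch of how eessi_container.sh picks its tmp storage location:
# honor $TMPDIR when set, fall back to /tmp otherwise
storage="${TMPDIR:-/tmp}"
# create a randomly named eessi.* directory under the chosen location
tmpdir=$(mktemp -d "${storage}/eessi.XXXXXXXXXX")
echo "Using ${tmpdir} as tmp storage"
```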

The remainder of this page will demonstrate different scenarios using some of the command line options used for read-only access.

Other options supported by the script will be discussed in a yet-to-be-written section covering building software to be added to the EESSI stack.

"},{"location":"getting_access/eessi_container/#resuming-a-previous-session","title":"Resuming a previous session","text":"

You may have noted the following line in the output of eessi_container.sh

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).\n

Note

The parameter after --resume (/tmp/eessi.abc123defg) will be different when you run eessi_container.sh.

Scroll back in your terminal and copy it so you can pass it to --resume.

Try the following command to \"resume\" from the last session.

./eessi_container.sh --resume /tmp/eessi.abc123defg\n
This should run much faster because the container image has been cached in the temporary directory (/tmp/eessi.abc123defg). You should get to the prompt (Apptainer> or Singularity>) and can use EESSI with the state where you left the previous session.

Note

The state refers to what was stored on disk, not what was changed in memory. Particularly, any environment (variable) settings are not restored automatically.

Because the /tmp/eessi.abc123defg directory contains a home directory which includes the saved history of your last session, you can easily restore the environment (variable) settings. Type history to see which commands you ran. You should be able to access the history as you would do in a normal terminal session.

"},{"location":"getting_access/eessi_container/#running-a-simple-command","title":"Running a simple command","text":"

Let's \"ls /cvmfs/software.eessi.io\" through the eessi_container.sh script to check if the CernVM-FS EESSI repository is accessible:

./eessi_container.sh --mode run ls /cvmfs/software.eessi.io\n

You should see an output such as

Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell  --fusemount container:cvmfs2 cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.ymYGaZwoWC/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nhost_injections  latest  versions\n

Note that this time no interactive shell session is started in the container: only the provided command is run in the container, and when that finishes you are back in the shell session where you ran the eessi_container.sh script.

This is because we used the --mode run command line option.

Note

The last line in the output is the output of the ls command, which shows the contents of the /cvmfs/software.eessi.io directory.

Also, note that there is no shell prompt (Apptainer> or Singularity>), since no interactive shell session is started in the container.

As an alternative to specifying the command as we did above, you can also do the following.

CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh --mode shell <<< ${CMD}\n

Note

We changed the mode from run to shell because we use a different method to let the script run our command, by feeding it in via the stdin input channel using <<<.

Because shell is the default value for --mode we can also omit this and simply run

CMD=\"ls -l /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n

"},{"location":"getting_access/eessi_container/#running-a-script","title":"Running a script","text":"

While running a simple command can be sufficient in some cases, you often want to run scripts containing multiple commands.

Let's run the script shown below.

First, copy-paste the contents of the script shown below, and create a file named eessi_architectures.sh in your current directory. Also make the script executable by running:

chmod +x eessi_architectures.sh\n

Here are the contents for the eessi_architectures.sh script:

#!/usr/bin/env bash\n#\n# This script determines which architectures are included in the\n# latest EESSI version. It makes use of the specific directory\n# structure in the EESSI repository.\n#\n\n# determine list of available OS types\nBASE=${EESSI_CVMFS_REPO:-/cvmfs/software.eessi.io}/latest/software\ncd ${BASE}\nfor os_type in $(ls -d *)\ndo\n    # determine architecture families\n    OS_BASE=${BASE}/${os_type}\n    cd ${OS_BASE}\n    for arch_family in $(ls -d *)\n    do\n        # determine CPU microarchitectures\n        OS_ARCH_BASE=${BASE}/${os_type}/${arch_family}\n        cd ${OS_ARCH_BASE}\n        for microarch in $(ls -d *)\n        do\n            case ${microarch} in\n                amd | intel )\n                    for sub in $(ls ${microarch})\n                    do\n                        echo \"${os_type}/${arch_family}/${microarch}/${sub}\"\n                    done\n                    ;;\n                * )\n                    echo \"${os_type}/${arch_family}/${microarch}\"\n                    ;;\n            esac\n        done\n    done\ndone\n
Run the script as follows
./eessi_container.sh --mode shell < eessi_architectures.sh\n
The output should be similar to
Using /tmp/eessi.abc123defg as tmp storage (add '--resume /tmp/eessi.abc123defg' to resume where this session ended).$\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nlinux/aarch64/generic\nlinux/aarch64/graviton2\nlinux/aarch64/graviton3\nlinux/ppc64le/generic\nlinux/ppc64le/power9le\nlinux/x86_64/amd/zen2\nlinux/x86_64/amd/zen3\nlinux/x86_64/generic\nlinux/x86_64/intel/haswell\nlinux/x86_64/intel/skylake_avx512\n
Lines 6 to 15 show the output of the script eessi_architectures.sh.

If you want to use the mode run, you have to make the script's location available inside the container.

This can be done by mapping the current directory (${PWD}), which contains eessi_architectures.sh, to a directory inside the container that does not yet exist, using the $SINGULARITY_BIND or $APPTAINER_BIND environment variable.

For example:

SINGULARITY_BIND=${PWD}:/scripts ./eessi_container.sh --mode run /scripts/eessi_architectures.sh\n

"},{"location":"getting_access/eessi_container/#running-scripts-or-commands-with-parameters-starting-with-or-","title":"Running scripts or commands with parameters starting with - or --","text":"

Let's assume we would like to get more information about the entries of /cvmfs/software.eessi.io. If we would just run

./eessi_container.sh --mode run ls -lH /cvmfs/software.eessi.io\n
we would get an error message such as
ERROR: Unknown option: -lH\n
We can resolve this in two ways:

  1. Using the stdin channel as described above, for example, by simply running
    CMD=\"ls -lH /cvmfs/software.eessi.io\"\n./eessi_container.sh <<< ${CMD}\n
    which should result in the output similar to
    Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user   10 Jun 30  2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user   16 May  4  2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10  2021 versions\n
  2. Using the flag terminator -- which tells eessi_container.sh to stop parsing command line arguments. For example,
    ./eessi_container.sh --mode run -- ls -lH /cvmfs/software.eessi.io\n
    which should result in the output similar to
    Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q run --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif ls -lH /cvmfs/software.eessi.io\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\ntotal 10\nlrwxrwxrwx 1 user user   10 Jun 30  2021 host_injections -> /opt/eessi\nlrwxrwxrwx 1 user user   16 May  4  2022 latest -> versions/2021.12\ndrwxr-xr-x 3 user user 4096 Dec 10  2021 versions\n
"},{"location":"getting_access/eessi_container/#running-eessi-demos","title":"Running EESSI demos","text":"

For examples of scripts that use the software provided by EESSI, see Running EESSI demos.

"},{"location":"getting_access/eessi_container/#launching-containers-more-quickly","title":"Launching containers more quickly","text":"

Subsequent runs of eessi_container.sh may reuse temporary data of a previous session, which includes the pulled image of the container. However, reusing a previous session (and thereby launching the container more quickly) is not always what we want.

The eessi_container.sh script may (re)-use a cache directory provided via $SINGULARITY_CACHEDIR (or $APPTAINER_CACHEDIR when using Apptainer). Hence, the container image does not have to be downloaded again even when starting a new session. The example below illustrates this.

export SINGULARITY_CACHEDIR=${PWD}/container_cache_dir\ntime ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
which should produce output similar to
Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections  latest  versions\n\nreal    m40.445s\nuser    3m2.621s\nsys     0m7.402s\n
The next run using the same cache directory, e.g., by simply executing
time ./eessi_container.sh <<< \"ls /cvmfs/software.eessi.io\"\n
is much faster
Using /tmp/eessi.abc123defg as tmp directory (to resume session add '--resume /tmp/eessi.abc123defg').\nPulling container image from docker://ghcr.io/eessi/build-node:debian11 to /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nLaunching container with command (next line):\nsingularity -q shell --fusemount container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io /tmp/eessi.abc123defg/ghcr.io_eessi_build_node_debian11.sif\nCernVM-FS: pre-mounted on file descriptor 3\nCernVM-FS: loading Fuse module... done\nfuse: failed to clone device fd: Inappropriate ioctl for device\nfuse: trying to continue without -o clone_fd.\nhost_injections  latest  versions\n\nreal    0m2.781s\nuser    0m0.172s\nsys     0m0.436s\n
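To check what ended up in the cache, and how much space it takes, you can inspect the cache directory directly. A small sketch, assuming the same SINGULARITY_CACHEDIR as in the example above:

```shell
#!/usr/bin/env bash
# Inspect the container cache directory used by Singularity/Apptainer
# (assumes SINGULARITY_CACHEDIR was set as in the example above)
export SINGULARITY_CACHEDIR=${PWD}/container_cache_dir
mkdir -p "${SINGULARITY_CACHEDIR}"   # normally created automatically on first use
du -sh "${SINGULARITY_CACHEDIR}"     # total size of the cached image data
```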

Note

Each run of eessi_container.sh (without specifying --resume) creates a new temporary directory. The temporary directory stores, among other data, the image file of the container. Thus we can ensure that the container is available locally for a subsequent run.

However, this may quickly consume scarce resources, for example, a small partition where /tmp is located (default for temporary storage, see --help for specifying a different location).

See the next section for how to clean up temporary data that is no longer needed.

"},{"location":"getting_access/eessi_container/#reducing-disk-usage","title":"Reducing disk usage","text":"

By default eessi_container.sh creates a temporary directory under /tmp. The directories are named eessi.RANDOM where RANDOM is a 10-character string. The script does not automatically remove these directories. To determine their total disk usage, simply run

du -sch /tmp/eessi.*\n
which could result in output similar to
333M    /tmp/eessi.session123\n333M    /tmp/eessi.session456\n333M    /tmp/eessi.session789\n997M    total\n
Clean up disk usage by simply removing directories you do not need any longer.
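A sketch of such a cleanup is shown below. For safety it runs against a throwaway sandbox directory rather than /tmp; for real use you would point BASE at /tmp and double-check the matched directories before removing them:

```shell
#!/usr/bin/env bash
# Remove EESSI temporary directories you no longer need.
# BASE is a sandbox here for safe experimentation; for real cleanup it would be /tmp.
BASE=$(mktemp -d)
mkdir -p "${BASE}/eessi.session123" "${BASE}/eessi.session456"
du -sch "${BASE}"/eessi.* > /dev/null   # inspect total usage first
rm -rf "${BASE}"/eessi.*                # then remove what is no longer needed
ls -A "${BASE}"                         # nothing left afterwards
```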

"},{"location":"getting_access/eessi_container/#eessi-container-image","title":"EESSI container image","text":"

If you would like to directly use an EESSI container image, you can do so by configuring Apptainer (or Singularity) to correctly mount the CVMFS repository:

# honor $TMPDIR if it is already defined, use /tmp otherwise\nif [ -z $TMPDIR ]; then\n    export WORKDIR=/tmp/$USER\nelse\n    export WORKDIR=$TMPDIR/$USER\nfi\n\nmkdir -p ${WORKDIR}/{var-lib-cvmfs,var-run-cvmfs,home}\nexport SINGULARITY_BIND=\"${WORKDIR}/var-run-cvmfs:/var/run/cvmfs,${WORKDIR}/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"${WORKDIR}/home:/home/$USER\"\nexport EESSI_REPO=\"container:cvmfs2 software.eessi.io /cvmfs/software.eessi.io\"\nexport EESSI_CONTAINER=\"docker://ghcr.io/eessi/client:centos7\"\nsingularity shell --fusemount \"$EESSI_REPO\" \"$EESSI_CONTAINER\"\n
"},{"location":"getting_access/eessi_limactl/","title":"Installing EESSI with Lima on MacOS","text":""},{"location":"getting_access/eessi_limactl/#installation-of-lima","title":"Installation of Lima","text":"

See Lima documentation: https://lima-vm.io/docs/installation/

brew install lima\n
"},{"location":"getting_access/eessi_limactl/#installing-eessi-in-limactl-with-eessi-template","title":"Installing EESSI in limactl with EESSI template","text":""},{"location":"getting_access/eessi_limactl/#example-eessiyaml-file","title":"Example eessi.yaml file","text":"

Use the EESSI template to install a virtual machine with EESSI available. Create an eessi.yaml file:

Install a virtual machine with a Debian image, an Ubuntu image, or a Rocky 9 image, using one of the following templates:
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n# Try to use release-yyyyMMdd image if available. Note that release-yyyyMMdd will be removed after several months.\n- location: \"https://cloud.debian.org/images/cloud/bookworm/20240429-1732/debian-12-genericcloud-amd64-20240429-1732.qcow2\"\n  arch: \"x86_64\"\n  digest: \"sha512:6cc752d71b390c7fea64b0b598225914a7f4adacd4a33fa366187fac01094648628e0681a109ae9320b9a79aba2832f33395fa13154dad636465b7d9cdbed599\"\n- location: \"https://cloud.debian.org/images/cloud/bookworm/20240429-1732/debian-12-genericcloud-arm64-20240429-1732.qcow2\"\n  arch: \"aarch64\"\n  digest: \"sha512:59afc40ad0062ca100c9280a281256487348c8aa23b3e70c329a6d6f29b5343b628622e63e0b9b4fc3987dd691d5f3c657233186b3271878d5e0aa0b4d264b06\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2\"\n  arch: \"x86_64\"\n- location: \"https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-arm64.qcow2\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    wget -P /tmp https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\n    sudo dpkg -i /tmp/cvmfs-release-latest_all.deb\n    rm -f /tmp/cvmfs-release-latest_all.deb\n    sudo apt-get update\n    sudo apt-get install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! 
timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n# Try to use release-yyyyMMdd image if available. Note that release-yyyyMMdd will be removed after several months.\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release-20240514/ubuntu-22.04-server-cloudimg-amd64.img\"\n  arch: \"x86_64\"\n  digest: \"sha256:1718f177dde4c461148ab7dcbdcf2f410c1f5daa694567f6a8bbb239d864b525\"\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release-20240514/ubuntu-22.04-server-cloudimg-arm64.img\"\n  arch: \"aarch64\"\n  digest: \"sha256:f6bf7305207a2adb9a2e2f701dc71f5747e5ba88f7b67cdb44b3f5fa6eea94a3\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img\"\n  arch: \"x86_64\"\n- location: \"https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    wget -P /tmp https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\n    sudo dpkg -i /tmp/cvmfs-release-latest_all.deb\n    rm -f /tmp/cvmfs-release-latest_all.deb\n    sudo apt-get update\n    sudo apt-get install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! 
timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
# A template to use the EESSI software stack (see https://eessi.io) on macOS\n# $ limactl start ./eessi.yaml\n# $ limactl shell eessi\n\nimages:\n- location: \"https://dl.rockylinux.org/pub/rocky/9.3/images/x86_64/Rocky-9-GenericCloud-Base-9.3-20231113.0.x86_64.qcow2\"\n  arch: \"x86_64\"\n  digest: \"sha256:7713278c37f29b0341b0a841ca3ec5c3724df86b4d97e7ee4a2a85def9b2e651\"\n- location: \"https://dl.rockylinux.org/pub/rocky/9.3/images/aarch64/Rocky-9-GenericCloud-Base-9.3-20231113.0.aarch64.qcow2\"\n  arch: \"aarch64\"\n  digest: \"sha256:1948a5e00786dbf3230335339cf96491659e17444f5d00dabac0f095a7354cc1\"\n# Fallback to the latest release image.\n# Hint: run `limactl prune` to invalidate the cache\n- location: \"https://dl.rockylinux.org/pub/rocky/9/images/x86_64/Rocky-9-GenericCloud.latest.x86_64.qcow2\"\n  arch: \"x86_64\"\n- location: \"https://dl.rockylinux.org/pub/rocky/9/images/aarch64/Rocky-9-GenericCloud.latest.aarch64.qcow2\"\n  arch: \"aarch64\"\n\nmounts:\n- location: \"~\"\n- location: \"/tmp/lima\"\n  writable: true\ncontainerd:\n  system: false\n  user: false\nprovision:\n- mode: system\n  script: |\n    #!/bin/bash\n    sudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\n    sudo yum install -y cvmfs\n    if [ ! -f /etc/cvmfs/default.local ]; then\n        sudo echo \"CVMFS_HTTP_PROXY=DIRECT\" >> /etc/cvmfs/default.local\n        sudo echo \"CVMFS_QUOTA_LIMIT=10000\" >> /etc/cvmfs/default.local\n    fi\n    sudo cvmfs_config setup\nprobes:\n- script: |\n    #!/bin/bash\n    set -eux -o pipefail\n    if ! timeout 30s bash -c \"until ls /cvmfs/software.eessi.io >/dev/null 2>&1; do sleep 3; done\"; then\n      echo >&2 \"EESSI repository is not available yet\"\n      exit 1\n    fi\n  hint: See \"/var/log/cloud-init-output.log\" in the guest\n
"},{"location":"getting_access/eessi_limactl/#create-the-virtual-machine-with-the-eessiyaml-file","title":"Create the virtual machine with the eessi.yaml file","text":"
limactl create --name eessi ./eessi.yaml\n
"},{"location":"getting_access/eessi_limactl/#start-and-enter-the-virtual-machine","title":"Start and enter the virtual machine","text":"
limactl start eessi\nlimactl shell eessi\n

EESSI should now be available in the virtual machine:

user@machine:/Users/user$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n  Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\n  archdetect says x86_64/intel/haswell\n  Using x86_64/intel/haswell as software subdirectory.\n  Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\n  Found Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/SitePackage.lua\n  Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\n  Using /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all as the site extension directory to be added to MODULEPATH.\n  Initializing Lmod...\n  Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Prepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Environment set up to use EESSI (2023.06), have fun!\n
"},{"location":"getting_access/eessi_limactl/#cleanup-virtual-machine","title":"Cleanup virtual machine","text":"
limactl stop eessi\nlimactl delete eessi\nlimactl prune\n
"},{"location":"getting_access/eessi_limactl/#advanced-set-resources-for-new-virtual-machine","title":"Advanced: Set resources for new virtual machine","text":"
# Set resources\nRATIO_RAM=0.5\nRAM=$(numfmt --to=none --to-unit=1073741824 --format=%.0f  $(echo $(sysctl hw.memsize_usable | awk '{print $2}' ) \"*$RATIO_RAM\" | bc -l))\nCPUS=$(sysctl hw.physicalcpu | awk '{print $2}')\n# Create VM\nlimactl create --cpus $CPUS --memory $RAM --name eessi ./eessi.yaml\nlimactl list\n
"},{"location":"getting_access/eessi_wsl/","title":"Installing EESSI with Windows Subsystem for Linux","text":""},{"location":"getting_access/eessi_wsl/#basic-commands-with-wsl","title":"Basic commands with WSL","text":""},{"location":"getting_access/eessi_wsl/#list-the-available-linux-distributions-for-installation","title":"List the available linux distributions for installation","text":"
C:/users/user>wsl --list --online\nThe following is a list of valid distributions that can be installed.\nInstall using 'wsl.exe --install <Distro>'.\n\nNAME                                   FRIENDLY NAME\nUbuntu                                 Ubuntu\nDebian                                 Debian GNU/Linux\nkali-linux                             Kali Linux Rolling\nUbuntu-18.04                           Ubuntu 18.04 LTS\nUbuntu-20.04                           Ubuntu 20.04 LTS\nUbuntu-22.04                           Ubuntu 22.04 LTS\nUbuntu-24.04                           Ubuntu 24.04 LTS\nOracleLinux_7_9                        Oracle Linux 7.9\nOracleLinux_8_7                        Oracle Linux 8.7\nOracleLinux_9_1                        Oracle Linux 9.1\nopenSUSE-Leap-15.5                     openSUSE Leap 15.5\nSUSE-Linux-Enterprise-Server-15-SP4    SUSE Linux Enterprise Server 15 SP4\nSUSE-Linux-Enterprise-15-SP5           SUSE Linux Enterprise 15 SP5\nopenSUSE-Tumbleweed                    openSUSE Tumbleweed\n
"},{"location":"getting_access/eessi_wsl/#list-the-installed-machines","title":"List the installed machines","text":"
C:/users/user>wsl --list --verbose\n  NAME      STATE           VERSION\n* Debian    Stopped         2\n
"},{"location":"getting_access/eessi_wsl/#reconnecting-to-a-virtual-machine-with-wsl","title":"Reconnecting to a Virtual machine with wsl","text":"
C:/users/user>wsl --distribution Debian\nuser@id:~$\n

For more documentation on using WSL, you can check out the following pages:

"},{"location":"getting_access/eessi_wsl/#installing-a-linux-distribution-with-wsl","title":"Installing a linux distribution with WSL","text":"
C:/users/user>wsl --install --distribution Debian\nDebian GNU/Linux is already installed.\nLaunching Debian GNU/Linux...\nInstalling, this may take a few minutes...\nPlease create a default UNIX user account. The username does not need to match your Windows username.\nFor more information visit: https://aka.ms/wslusers\nEnter new UNIX username: user\nNew password:\nRetype new password:\npasswd: password updated successfully\nInstallation successful!\n
"},{"location":"getting_access/eessi_wsl/#installing-eessi-in-the-virtual-machine","title":"Installing EESSI in the virtual machine","text":"
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nsudo apt-get install wget\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
"},{"location":"getting_access/eessi_wsl/#start-cernvm-fs-in-windows-subsystem-for-linux","title":"Start cernVM-FS in Windows Subsystem for Linux","text":"

When the virtual machine is restarted, CernVM-FS needs to be remounted with the following command.

# start CernVM-FS on WSL\nsudo cvmfs_config wsl2_start\n

If you do not wish to do this, you can set up the automounter. Examples are available here.

EESSI should now be available in the virtual machine:

user@id:~$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n  Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\n  archdetect says x86_64/intel/haswell\n  Using x86_64/intel/haswell as software subdirectory.\n  Found Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\n  Found Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/.lmod/SitePackage.lua\n  Using /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\n  Using /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all as the site extension directory to be added to MODULEPATH.\n  Initializing Lmod...\n  Prepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Prepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\n  Environment set up to use EESSI (2023.06), have fun!\n
"},{"location":"getting_access/eessi_wsl/#cleanup-of-the-virtual-machine","title":"Cleanup of the virtual machine","text":"
C:/users/user>wsl --terminate Debian\nC:/users/user>wsl --unregister Debian\n
"},{"location":"getting_access/is_eessi_accessible/","title":"Is EESSI accessible?","text":"

EESSI can be accessed via a native (CernVM-FS) installation, or via a container that includes CernVM-FS.

Before you look into these options, check if EESSI is already accessible on your system.

Run the following command:

ls /cvmfs/software.eessi.io\n

Note

This ls command may take a couple of seconds to finish, since CernVM-FS may need to download or update the metadata for that directory.
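If you script this check, it helps to retry briefly instead of failing on the first slow lookup. A minimal sketch; the helper name `wait_for_repo` and the timeout values are illustrative, not part of EESSI:

```shell
#!/bin/bash
# Illustrative helper (not part of EESSI): retry until a CernVM-FS mount
# point answers, since the very first lookup may be slow while metadata
# is being downloaded.
wait_for_repo() {
    local repo_path="$1" timeout_s="${2:-30}" waited=0
    until ls "$repo_path" >/dev/null 2>&1; do
        if [ "$waited" -ge "$timeout_s" ]; then
            echo "repository not available: $repo_path" >&2
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    echo "repository available: $repo_path"
}

wait_for_repo /cvmfs/software.eessi.io 5 || echo "EESSI is not accessible on this system"
```

On systems without EESSI, the helper gives up after the timeout instead of hanging indefinitely.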

If you see output like shown below, you already have access to EESSI on your system.

host_injections  latest  versions\n

To start using EESSI, continue reading about Setting up environment.

If you see an error message as shown below, EESSI is not yet accessible on your system.

ls: /cvmfs/software.eessi.io: No such file or directory\n
No worries, you can still get access to EESSI.

Continue reading about the Native installation of EESSI, or access via the EESSI container.

"},{"location":"getting_access/native_installation/","title":"Native installation","text":""},{"location":"getting_access/native_installation/#installation-for-single-clients","title":"Installation for single clients","text":"

Setting up native access to EESSI (that is, a system-wide deployment that does not require workarounds like using a container) requires the installation and configuration of CernVM-FS.

This requires admin privileges, since you need to install CernVM-FS as an OS package.

The following actions must be taken for a (basic) native installation of EESSI:

The good news is that all of this only requires a handful of commands:

RHEL-based Linux distributionsDebian-based Linux distributions
# Installation commands for RHEL-based distros like CentOS, Rocky Linux, Almalinux, Fedora, ...\n\n# install CernVM-FS\nsudo yum install -y https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest.noarch.rpm\nsudo yum install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nsudo yum install -y https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi-latest.noarch.rpm\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n
# Installation commands for Debian-based distros like Ubuntu, ...\n\n# install CernVM-FS\nsudo apt-get install lsb-release\nwget https://ecsft.cern.ch/dist/cvmfs/cvmfs-release/cvmfs-release-latest_all.deb\nsudo dpkg -i cvmfs-release-latest_all.deb\nrm -f cvmfs-release-latest_all.deb\nsudo apt-get update\nsudo apt-get install -y cvmfs\n\n# install EESSI configuration for CernVM-FS\nwget https://github.com/EESSI/filesystem-layer/releases/download/latest/cvmfs-config-eessi_latest_all.deb\nsudo dpkg -i cvmfs-config-eessi_latest_all.deb\n\n# create client configuration file for CernVM-FS (no squid proxy, 10GB local CernVM-FS client cache)\nsudo bash -c \"echo 'CVMFS_CLIENT_PROFILE=\"single\"' > /etc/cvmfs/default.local\"\nsudo bash -c \"echo 'CVMFS_QUOTA_LIMIT=10000' >> /etc/cvmfs/default.local\"\n\n# make sure that EESSI CernVM-FS repository is accessible\nsudo cvmfs_config setup\n

Note

The default location for the cache directory is /var/lib/cvmfs. Please check that the partition on which this directory is stored is big enough to hold the cache (and other data). You may override the location by adding CVMFS_CACHE_BASE=<some other directory for the cache> to your default.local, e.g. by running

sudo bash -c \"echo 'CVMFS_CACHE_BASE=<some other directory for the cache>' >> /etc/cvmfs/default.local\"\n
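Putting the client settings together, a complete /etc/cvmfs/default.local could then look like the sketch below; the cache path is only an example, pick a directory on a sufficiently large partition:

```shell
# Example /etc/cvmfs/default.local (CernVM-FS config files are shell fragments)
CVMFS_CLIENT_PROFILE="single"           # no squid proxy
CVMFS_QUOTA_LIMIT=10000                 # 10 GB local client cache
CVMFS_CACHE_BASE=/scratch/cvmfs-cache   # example path on a larger partition
```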

"},{"location":"getting_access/native_installation/#installation-for-larger-systems-eg-clusters","title":"Installation for larger systems (e.g. clusters)","text":"

When using CernVM-FS on a larger number of local clients, e.g. on an HPC cluster or set of workstations, it is very strongly recommended to at least set up some Squid proxies close to your clients. These Squid proxies will be used to cache content that was recently accessed by your clients, which reduces the load on the Stratum 1 servers and reduces the latency for your clients. As a rule of thumb, you should use about one proxy per 500 clients, and have a minimum of two. Instructions for setting up a Squid proxy can be found in the CernVM-FS documentation and in the CernVM-FS tutorial.
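The rule of thumb above can be turned into a quick calculation; the client count used here is just an example:

```shell
# ~1 proxy per 500 clients, with a minimum of two (rule of thumb from the text)
clients=1800
proxies=$(( (clients + 499) / 500 ))   # ceiling division
if [ "$proxies" -lt 2 ]; then proxies=2; fi
echo "$proxies proxies recommended for $clients clients"   # prints: 4 proxies recommended for 1800 clients
```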

Additionally, setting up a private Stratum 1, which will make a full copy of the repository, can be beneficial to improve the latency and bandwidth even further, and to be better protected against network outages. Instructions for setting up your own EESSI Stratum 1 can be found in setting up your own CernVM-FS Stratum 1 mirror server.

"},{"location":"getting_access/native_installation/#configuring-your-client-to-use-a-squid-proxy","title":"Configuring your client to use a Squid proxy","text":"

If you have set up one or more Squid proxies, you will have to add them to your CernVM-FS client configuration. This can be done by removing CVMFS_CLIENT_PROFILE=\"single\" from /etc/cvmfs/default.local and adding the following line:

CVMFS_HTTP_PROXY=\"http://ip-of-your-1st-proxy:port|http://ip-of-your-2nd-proxy:port\"\n

In this case, both proxies are equally preferable. More advanced use cases can be found in the CernVM-FS documentation.
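One common variant, sketched here under the assumption that falling back to direct connections is acceptable at your site, appends a `;DIRECT` failover group after the load-balanced proxy group (`|` balances within a group, `;` separates failover groups):

```shell
# In /etc/cvmfs/default.local: two load-balanced proxies, with direct
# connections as a fallback if both proxies are unreachable
CVMFS_HTTP_PROXY="http://ip-of-your-1st-proxy:port|http://ip-of-your-2nd-proxy:port;DIRECT"
```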

"},{"location":"getting_access/native_installation/#configuring-your-client-to-use-a-private-stratum-1-mirror-server","title":"Configuring your client to use a private Stratum 1 mirror server","text":"

If you have set up your own Stratum 1 mirror server that replicates the EESSI CernVM-FS repositories, you can instruct your CernVM-FS client(s) to use it by prepending your newly created Stratum 1 to the existing list of EESSI Stratum 1 servers by creating a local CVMFS configuration file for the EESSI domain:

echo 'CVMFS_SERVER_URL=\"http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@;$CVMFS_SERVER_URL\"' | sudo tee -a /etc/cvmfs/domain.d/eessi.io.local\n

It is also strongly recommended to disable the GEO API when using a private Stratum 1, because you want your private Stratum 1 to be picked first anyway. In order to do this, add the following to /etc/cvmfs/domain.d/eessi.io.local:

CVMFS_USE_GEOAPI=no\n

Note

By prepending your new Stratum 1 to the list of existing Stratum 1 servers and disabling the GEO API, your clients should by default use the private Stratum 1. In case of downtime of your private Stratum 1, they will also still be able to make use of the public EESSI Stratum 1 servers.
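Combining the two settings, /etc/cvmfs/domain.d/eessi.io.local would then contain:

```shell
# /etc/cvmfs/domain.d/eessi.io.local
# prepend your own Stratum 1, keeping the public EESSI servers as fallback
CVMFS_SERVER_URL="http://<url-or-ip-to-your-stratum1>/cvmfs/@fqrn@;$CVMFS_SERVER_URL"
# prefer the private Stratum 1 instead of GEO API ordering
CVMFS_USE_GEOAPI=no
```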

"},{"location":"getting_access/native_installation/#applying-changes-in-the-cernvm-fs-client-configuration-files","title":"Applying changes in the CernVM-FS client configuration files","text":"

After you have made any changes to the CernVM-FS client configuration, you will have to apply them. If this is the first time you set up the client, you can simply run:

sudo cvmfs_config setup\n

If you already had configured the client before, you can reload the configuration for the EESSI repository (or, similarly, for any other repository) using:

sudo cvmfs_config reload -c software.eessi.io\n
"},{"location":"known_issues/eessi-2023.06/","title":"Known issues","text":""},{"location":"known_issues/eessi-2023.06/#eessi-production-repository-v202306","title":"EESSI Production Repository (v2023.06)","text":""},{"location":"known_issues/eessi-2023.06/#failed-to-modify-ud-qp-to-init-on-mlx5_0-operation-not-permitted","title":"Failed to modify UD QP to INIT on mlx5_0: Operation not permitted","text":"

This is an error that occurs with OpenMPI after updating to OFED 23.10.

An upstream EasyBuild issue has been opened for this problem. See: https://github.com/easybuilders/easybuild-easyconfigs/issues/20233

Workarounds

You can instruct OpenMPI to not use libfabric and to turn off `uct` (see https://openucx.readthedocs.io/en/master/running.html#running-mpi) by passing the following options to `mpirun`:

mpirun -mca pml ucx -mca btl '^uct,ofi' -mca mtl '^ofi'\n
Or equivalently, you can set the following environment variables:
export OMPI_MCA_btl='^uct,ofi'\nexport OMPI_MCA_pml='ucx'\nexport OMPI_MCA_mtl='^ofi'\n
You may also set these additional environment variables via site-specific Lmod hooks:
require(\"strict\")\nlocal hook=require(\"Hook\")\n\n-- Fix Failed to modify UD QP to INIT on mlx5_0: Operation not permitted\nfunction fix_ud_qp_init_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('OMPI_MCA_btl', '^uct,ofi')\n        setenv('OMPI_MCA_pml', 'ucx')\n        setenv('OMPI_MCA_mtl', '^ofi')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    fix_ud_qp_init_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n
For more information about how to write and implement site-specific Lmod hooks, please check [EESSI Site Specific Configuration LMOD Hooks](site_specific_config/lmod_hooks.md)"},{"location":"known_issues/eessi-2023.06/#gcc-1220-and-foss-2022b-based-modules-cannot-be-loaded-on-zen4-architecture","title":"GCC-12.2.0 and foss-2022b based modules cannot be loaded on zen4 architecture","text":"

The zen4 architecture was released in late 2022. As a result, the compilers and BLAS libraries that are part of the 2022b toolchain generation did not yet (fully) support this architecture. Concretely, it was found in this PR that unit tests in the OpenBLAS version that is part of the foss-2022b toolchain were failing. It was therefore decided not to support this toolchain generation at all on the zen4 architecture.

"},{"location":"meetings/2022-09-amsterdam/","title":"EESSI Community Meeting (Sept'22, Amsterdam)","text":""},{"location":"meetings/2022-09-amsterdam/#practical-info","title":"Practical info","text":""},{"location":"meetings/2022-09-amsterdam/#agenda","title":"Agenda","text":"

(subject to changes)

We envision a mix of presentations, experience reports, demos, and hands-on sessions and/or hackathons related to the EESSI project.

If you would like to give a talk or host a session, please let us know via the EESSI Slack!

"},{"location":"meetings/2022-09-amsterdam/#wed-14-sept-2022","title":"Wed 14 Sept 2022","text":""},{"location":"meetings/2022-09-amsterdam/#thu-15-sept-2022","title":"Thu 15 Sept 2022","text":""},{"location":"meetings/2022-09-amsterdam/#fri-16-sept-2022","title":"Fri 16 Sept 2022","text":""},{"location":"repositories/dev.eessi.io/","title":"Development repository (dev.eessi.io)","text":""},{"location":"repositories/dev.eessi.io/#what-is-deveessiio","title":"What is dev.eessi.io?","text":"

dev.eessi.io is the development repository of EESSI. With it, developers can deploy pre-release builds of their software to EESSI. This way, development versions of software can easily be tested on systems where the dev.eessi.io CernVM-FS repository is available.

On a system with dev.eessi.io mounted, access is possible with module use /cvmfs/dev.eessi.io/versions/2023.06/modules/all. Then, all that is left is to try out the development software!

"},{"location":"repositories/dev.eessi.io/#question-or-problems","title":"Question or problems","text":"

If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request. If you experience issues with the development repository, feel free to use the #dev.eessi.io channel of the EESSI Slack.

"},{"location":"repositories/dev.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.

"},{"location":"repositories/pilot/","title":"Pilot","text":""},{"location":"repositories/pilot/#pilot-software-stack-202112","title":"Pilot software stack (2021.12)","text":""},{"location":"repositories/pilot/#caveats","title":"Caveats","text":"

Danger

The EESSI pilot repository is no longer actively maintained, and should not be used for production work.

Please use the software.eessi.io repository instead.

The current EESSI pilot software stack (version 2021.12) is the 7th iteration. There are some known issues and limitations; please take these into account:

Do not use it for production work, and be careful when testing it on production systems!

"},{"location":"repositories/pilot/#reporting-problems","title":"Reporting problems","text":"

If you notice any problems, please report them via https://github.com/EESSI/software-layer/issues.

"},{"location":"repositories/pilot/#accessing-the-eessi-pilot-repository-through-singularity","title":"Accessing the EESSI pilot repository through Singularity","text":"

The easiest way to access the EESSI pilot repository is by using Singularity. If Singularity is already installed, no admin privileges are required; no other software is needed on the host either.

A container image is available in the GitHub Container Registry (see https://github.com/EESSI/filesystem-layer/pkgs/container/client-pilot). It only contains a minimal operating system + the necessary packages to access the EESSI pilot repository through CernVM-FS, and it is suitable for aarch64, ppc64le, and x86_64.

The container image can be used directly by Singularity (no prior download required), as follows:

To verify that things are working, check the contents of the /cvmfs/pilot.eessi-hpc.org/versions/2021.12 directory:

Singularity> ls /cvmfs/pilot.eessi-hpc.org/versions/2021.12\ncompat  init  software\n

"},{"location":"repositories/pilot/#standard-installation","title":"Standard installation","text":"

For those with privileges on their system, there are a number of example installation scripts for different architectures and operating systems available in the EESSI demo repository.

Here we prefer the Singularity approach as we can guarantee that the container image is up to date.

"},{"location":"repositories/pilot/#setting-up-the-eessi-environment","title":"Setting up the EESSI environment","text":"

Once you have the EESSI pilot repository mounted, you can set up the environment by sourcing the provided init script:

source /cvmfs/pilot.eessi-hpc.org/versions/2021.12/init/bash\n

If all goes well, you should see output like this:

Found EESSI pilot repo @ /cvmfs/pilot.eessi-hpc.org/versions/2021.12!\nUsing x86_64/intel/haswell as software subdirectory.\nUsing /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all as the directory to be added to MODULEPATH.\nFound Lmod configuration file at /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/.lmod/lmodrc.lua\nInitializing Lmod...\nPrepending /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI pilot software stack, have fun!\n[EESSI pilot 2021.12] $ \n

Now you're all set up! Go ahead and explore the software stack using \"module avail\", and go wild with testing the available software installations!

"},{"location":"repositories/pilot/#testing-the-eessi-pilot-software-stack","title":"Testing the EESSI pilot software stack","text":"

Please test the EESSI pilot software stack as you see fit: running simple commands, performing small calculations or running small benchmarks, etc.

Test scripts that have been verified to work correctly using the pilot software stack are available at https://github.com/EESSI/software-layer/tree/main/tests.

"},{"location":"repositories/pilot/#giving-feedback-or-reporting-problems","title":"Giving feedback or reporting problems","text":"

Any feedback is welcome, and questions or problem reports are welcome as well, through one of the EESSI communication channels:

"},{"location":"repositories/pilot/#available-software","title":"Available software","text":"

(last update: Mar 21st 2022)

EESSI currently supports the following HPC applications as well as all their dependencies:

[EESSI pilot 2021.12] $ module --nx avail\n\n--------------------------- /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/modules/all ----------------------------\n   ant/1.10.8-Java-11                                              LMDB/0.9.24-GCCcore-9.3.0\n   Arrow/0.17.1-foss-2020a-Python-3.8.2                            lz4/1.9.2-GCCcore-9.3.0\n   Bazel/3.6.0-GCCcore-9.3.0                                       Mako/1.1.2-GCCcore-9.3.0\n   Bison/3.5.3-GCCcore-9.3.0                                       MariaDB-connector-c/3.1.7-GCCcore-9.3.0\n   Boost/1.72.0-gompi-2020a                                        matplotlib/3.2.1-foss-2020a-Python-3.8.2\n   cairo/1.16.0-GCCcore-9.3.0                                      Mesa/20.0.2-GCCcore-9.3.0\n   CGAL/4.14.3-gompi-2020a-Python-3.8.2                            Meson/0.55.1-GCCcore-9.3.0-Python-3.8.2\n   CMake/3.16.4-GCCcore-9.3.0                                      METIS/5.1.0-GCCcore-9.3.0\n   CMake/3.20.1-GCCcore-10.3.0                                     MPFR/4.0.2-GCCcore-9.3.0\n   code-server/3.7.3                                               NASM/2.14.02-GCCcore-9.3.0\n   DB/18.1.32-GCCcore-9.3.0                                        ncdf4/1.17-foss-2020a-R-4.0.0\n   DB/18.1.40-GCCcore-10.3.0                                       netCDF-Fortran/4.5.2-gompi-2020a\n   double-conversion/3.1.5-GCCcore-9.3.0                           netCDF/4.7.4-gompi-2020a\n   Doxygen/1.8.17-GCCcore-9.3.0                                    nettle/3.6-GCCcore-9.3.0\n   EasyBuild/4.5.0                                                 networkx/2.4-foss-2020a-Python-3.8.2\n   EasyBuild/4.5.1                                         (D)     Ninja/1.10.0-GCCcore-9.3.0\n   Eigen/3.3.7-GCCcore-9.3.0                                       NLopt/2.6.1-GCCcore-9.3.0\n   Eigen/3.3.9-GCCcore-10.3.0                                      NSPR/4.25-GCCcore-9.3.0\n   ELPA/2019.11.001-foss-2020a                        
             NSS/3.51-GCCcore-9.3.0\n   expat/2.2.9-GCCcore-9.3.0                                       nsync/1.24.0-GCCcore-9.3.0\n   expat/2.2.9-GCCcore-10.3.0                                      numactl/2.0.13-GCCcore-9.3.0\n   FFmpeg/4.2.2-GCCcore-9.3.0                                      numactl/2.0.14-GCCcore-10.3.0\n   FFTW/3.3.8-gompi-2020a                                          OpenBLAS/0.3.9-GCC-9.3.0\n   FFTW/3.3.9-gompi-2021a                                          OpenBLAS/0.3.15-GCC-10.3.0\n   flatbuffers/1.12.0-GCCcore-9.3.0                                OpenFOAM/v2006-foss-2020a\n   FlexiBLAS/3.0.4-GCC-10.3.0                                      OpenFOAM/8-foss-2020a                              (D)\n   fontconfig/2.13.92-GCCcore-9.3.0                                OpenMPI/4.0.3-GCC-9.3.0\n   foss/2020a                                                      OpenMPI/4.1.1-GCC-10.3.0\n   foss/2021a                                                      OpenPGM/5.2.122-GCCcore-9.3.0\n   freetype/2.10.1-GCCcore-9.3.0                                   OpenSSL/1.1                                        (D)\n   FriBidi/1.0.9-GCCcore-9.3.0                                     OSU-Micro-Benchmarks/5.6.3-gompi-2020a\n   GCC/9.3.0                                                       Pango/1.44.7-GCCcore-9.3.0\n   GCC/10.3.0                                                      ParaView/5.8.0-foss-2020a-Python-3.8.2-mpi\n   GCCcore/9.3.0                                                   PCRE/8.44-GCCcore-9.3.0\n   GCCcore/10.3.0                                                  PCRE2/10.34-GCCcore-9.3.0\n   Ghostscript/9.52-GCCcore-9.3.0                                  Perl/5.30.2-GCCcore-9.3.0\n   giflib/5.2.1-GCCcore-9.3.0                                      Perl/5.32.1-GCCcore-10.3.0\n   git/2.23.0-GCCcore-9.3.0-nodocs                                 pixman/0.38.4-GCCcore-9.3.0\n   git/2.32.0-GCCcore-10.3.0-nodocs                        (D)     
pkg-config/0.29.2-GCCcore-9.3.0\n   GLib/2.64.1-GCCcore-9.3.0                                       pkg-config/0.29.2-GCCcore-10.3.0\n   GLPK/4.65-GCCcore-9.3.0                                         pkg-config/0.29.2                                  (D)\n   GMP/6.2.0-GCCcore-9.3.0                                         pkgconfig/1.5.1-GCCcore-9.3.0-Python-3.8.2\n   GMP/6.2.1-GCCcore-10.3.0                                        PMIx/3.1.5-GCCcore-9.3.0\n   gnuplot/5.2.8-GCCcore-9.3.0                                     PMIx/3.2.3-GCCcore-10.3.0\n   GObject-Introspection/1.64.0-GCCcore-9.3.0-Python-3.8.2         poetry/1.0.9-GCCcore-9.3.0-Python-3.8.2\n   gompi/2020a                                                     protobuf-python/3.13.0-foss-2020a-Python-3.8.2\n   gompi/2021a                                                     protobuf/3.13.0-GCCcore-9.3.0\n   groff/1.22.4-GCCcore-9.3.0                                      pybind11/2.4.3-GCCcore-9.3.0-Python-3.8.2\n   groff/1.22.4-GCCcore-10.3.0                                     pybind11/2.6.2-GCCcore-10.3.0\n   GROMACS/2020.1-foss-2020a-Python-3.8.2                          Python/2.7.18-GCCcore-9.3.0\n   GROMACS/2020.4-foss-2020a-Python-3.8.2                  (D)     Python/3.8.2-GCCcore-9.3.0\n   GSL/2.6-GCC-9.3.0                                               Python/3.9.5-GCCcore-10.3.0-bare\n   gzip/1.10-GCCcore-9.3.0                                         Python/3.9.5-GCCcore-10.3.0\n   h5py/2.10.0-foss-2020a-Python-3.8.2                             PyYAML/5.3-GCCcore-9.3.0\n   HarfBuzz/2.6.4-GCCcore-9.3.0                                    Qt5/5.14.1-GCCcore-9.3.0\n   HDF5/1.10.6-gompi-2020a                                         QuantumESPRESSO/6.6-foss-2020a\n   Horovod/0.21.3-foss-2020a-TensorFlow-2.3.1-Python-3.8.2         R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n   hwloc/2.2.0-GCCcore-9.3.0                                       R/4.0.0-foss-2020a\n   hwloc/2.4.1-GCCcore-10.3.0             
                         re2c/1.3-GCCcore-9.3.0\n   hypothesis/6.13.1-GCCcore-10.3.0                                RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n   ICU/66.1-GCCcore-9.3.0                                          Rust/1.52.1-GCCcore-10.3.0\n   ImageMagick/7.0.10-1-GCCcore-9.3.0                              ScaLAPACK/2.1.0-gompi-2020a\n   IPython/7.15.0-foss-2020a-Python-3.8.2                          ScaLAPACK/2.1.0-gompi-2021a-fb\n   JasPer/2.0.14-GCCcore-9.3.0                                     scikit-build/0.10.0-foss-2020a-Python-3.8.2\n   Java/11.0.2                                             (11)    SciPy-bundle/2020.03-foss-2020a-Python-3.8.2\n   jbigkit/2.1-GCCcore-9.3.0                                       SciPy-bundle/2021.05-foss-2021a\n   JsonCpp/1.9.4-GCCcore-9.3.0                                     SCOTCH/6.0.9-gompi-2020a\n   LAME/3.100-GCCcore-9.3.0                                        snappy/1.1.8-GCCcore-9.3.0\n   libarchive/3.5.1-GCCcore-10.3.0                                 Spark/3.1.1-foss-2020a-Python-3.8.2\n   libcerf/1.13-GCCcore-9.3.0                                      SQLite/3.31.1-GCCcore-9.3.0\n   libdrm/2.4.100-GCCcore-9.3.0                                    SQLite/3.35.4-GCCcore-10.3.0\n   libevent/2.1.11-GCCcore-9.3.0                                   SWIG/4.0.1-GCCcore-9.3.0\n   libevent/2.1.12-GCCcore-10.3.0                                  Szip/2.1.1-GCCcore-9.3.0\n   libfabric/1.11.0-GCCcore-9.3.0                                  Tcl/8.6.10-GCCcore-9.3.0\n   libfabric/1.12.1-GCCcore-10.3.0                                 Tcl/8.6.11-GCCcore-10.3.0\n   libffi/3.3-GCCcore-9.3.0                                        tcsh/6.22.02-GCCcore-9.3.0\n   libffi/3.3-GCCcore-10.3.0                                       TensorFlow/2.3.1-foss-2020a-Python-3.8.2\n   libgd/2.3.0-GCCcore-9.3.0                                       time/1.9-GCCcore-9.3.0\n   libGLU/9.0.1-GCCcore-9.3.0                                   
   Tk/8.6.10-GCCcore-9.3.0\n   libglvnd/1.2.0-GCCcore-9.3.0                                    Tkinter/3.8.2-GCCcore-9.3.0\n   libiconv/1.16-GCCcore-9.3.0                                     UCX/1.8.0-GCCcore-9.3.0\n   libjpeg-turbo/2.0.4-GCCcore-9.3.0                               UCX/1.10.0-GCCcore-10.3.0\n   libpciaccess/0.16-GCCcore-9.3.0                                 UDUNITS/2.2.26-foss-2020a\n   libpciaccess/0.16-GCCcore-10.3.0                                UnZip/6.0-GCCcore-9.3.0\n   libpng/1.6.37-GCCcore-9.3.0                                     UnZip/6.0-GCCcore-10.3.0\n   libsndfile/1.0.28-GCCcore-9.3.0                                 WRF/3.9.1.1-foss-2020a-dmpar\n   libsodium/1.0.18-GCCcore-9.3.0                                  X11/20200222-GCCcore-9.3.0\n   LibTIFF/4.1.0-GCCcore-9.3.0                                     x264/20191217-GCCcore-9.3.0\n   libtirpc/1.2.6-GCCcore-9.3.0                                    x265/3.3-GCCcore-9.3.0\n   libunwind/1.3.1-GCCcore-9.3.0                                   xorg-macros/1.19.2-GCCcore-9.3.0\n   libxc/4.3.4-GCC-9.3.0                                           xorg-macros/1.19.3-GCCcore-10.3.0\n   libxml2/2.9.10-GCCcore-9.3.0                                    Xvfb/1.20.9-GCCcore-9.3.0\n   libxml2/2.9.10-GCCcore-10.3.0                                   Yasm/1.3.0-GCCcore-9.3.0\n   libyaml/0.2.2-GCCcore-9.3.0                                     ZeroMQ/4.3.2-GCCcore-9.3.0\n   LittleCMS/2.9-GCCcore-9.3.0                                     Zip/3.0-GCCcore-9.3.0\n   LLVM/9.0.1-GCCcore-9.3.0                                        zstd/1.4.4-GCCcore-9.3.0\n
"},{"location":"repositories/pilot/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":""},{"location":"repositories/pilot/#x86_64","title":"x86_64","text":""},{"location":"repositories/pilot/#aarch64arm64","title":"aarch64/arm64","text":""},{"location":"repositories/pilot/#ppc64le","title":"ppc64le","text":""},{"location":"repositories/pilot/#easybuild-configuration","title":"EasyBuild configuration","text":"

EasyBuild v4.5.1 was used to install the software in the 2021.12 version of the pilot repository. Some installations leveraged pull requests with changes that were only included in later EasyBuild versions; see the build script that was used.

An example configuration of the build environment based on https://github.com/EESSI/software-layer can be seen here:

$ eb --show-config\n#\n# Current EasyBuild configuration\n# (C: command line argument, D: default value, E: environment variable, F: configuration file)\n#\nbuildpath         (E) = /tmp/eessi-build/easybuild/build\ncontainerpath     (E) = /tmp/eessi-build/easybuild/containers\ndebug             (E) = True\nfilter-deps       (E) = Autoconf, Automake, Autotools, binutils, bzip2, cURL, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, Lua, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars   (E) = LD_LIBRARY_PATH\nhooks             (E) = /home/eessi-build/software-layer/eb_hooks.py\nignore-osdeps     (E) = True\ninstallpath       (E) = /cvmfs/pilot.eessi-hpc.org/2021.06/software/linux/x86_64/intel/haswell\nmodule-extensions (E) = True\npackagepath       (E) = /tmp/eessi-build/easybuild/packages\nprefix            (E) = /tmp/eessi-build/easybuild\nrepositorypath    (E) = /tmp/eessi-build/easybuild/ebfiles_repo\nrobot-paths       (D) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/software/linux/x86_64/intel/haswell/software/EasyBuild/4.5.1/easybuild/easyconfigs\nrpath             (E) = True\nsourcepath        (E) = /tmp/eessi-build/easybuild/sources:\nsysroot           (E) = /cvmfs/pilot.eessi-hpc.org/versions/2021.12/compat/linux/x86_64\ntrace             (E) = True\nzip-logs          (E) = bzip2\n

"},{"location":"repositories/pilot/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the pilot repository is shown at http://status.eessi.io/pilot/.

"},{"location":"repositories/riscv.eessi.io/","title":"EESSI RISC-V development repository (riscv.eessi.io)","text":"

This repository contains development versions of an EESSI RISC-V software stack. Note that versions may be added, modified, or deleted at any time.

"},{"location":"repositories/riscv.eessi.io/#accessing-the-risc-v-repository","title":"Accessing the RISC-V repository","text":"

See Getting access; by making the EESSI CVMFS domain available, you will automatically have access to riscv.eessi.io as well.

"},{"location":"repositories/riscv.eessi.io/#using-riscveessiio","title":"Using riscv.eessi.io","text":"

This repository currently offers one version (20240402), and this contains both a compatibility layer and a software layer. Furthermore, initialization scripts are in place to set up the repository:

$ source /cvmfs/riscv.eessi.io/versions/20240402/init/bash\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $\n

You can even source the initialization script of the software.eessi.io production repository now, and it will automatically set up the RISC-V repository for you:

$ source /cvmfs/software.eessi.io/versions/2023.06/init/bash \nRISC-V architecture detected, but there is no RISC-V support yet in the production repository.\nAutomatically switching to version 20240402 of the RISC-V development repository /cvmfs/riscv.eessi.io.\nFor more details about this repository, see https://www.eessi.io/docs/repositories/riscv.eessi.io/.\n\nFound EESSI repo @ /cvmfs/riscv.eessi.io/versions/20240402!\narchdetect says riscv64/generic\nUsing riscv64/generic as software subdirectory.\nFound Lmod configuration file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/.lmod/SitePackage.lua\nUsing /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all as the site extension directory to be added to MODULEPATH.\nInitializing Lmod...\nPrepending /cvmfs/riscv.eessi.io/versions/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/riscv.eessi.io/host_injections/20240402/software/linux/riscv64/generic/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (20240402), have fun!\n{EESSI 20240402} $ \n

Note that we currently only provide generic builds, hence riscv64/generic is being used for all RISC-V CPUs.

The amount of software is constantly increasing. Besides having the foss/2023b toolchain available, applications like dlb, GROMACS, OSU Micro-Benchmarks, and R are already available as well. Use module avail to get a full and up-to-date listing of available software.

"},{"location":"repositories/riscv.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for this repository is shown at https://status.eessi.io.

"},{"location":"repositories/software.eessi.io/","title":"Production EESSI repository (software.eessi.io)","text":""},{"location":"repositories/software.eessi.io/#question-or-problems","title":"Question or problems","text":"

If you have any questions regarding EESSI, or if you experience a problem in accessing or using it, please open a support request.

"},{"location":"repositories/software.eessi.io/#accessing-the-eessi-repository","title":"Accessing the EESSI repository","text":"

See Getting access.

"},{"location":"repositories/software.eessi.io/#using-softwareeessiio","title":"Using software.eessi.io","text":"

See Using EESSI.

"},{"location":"repositories/software.eessi.io/#available-software","title":"Available software","text":"

See Available software.

"},{"location":"repositories/software.eessi.io/#architecture-and-micro-architecture-support","title":"Architecture and micro-architecture support","text":"

See CPU targets.

"},{"location":"repositories/software.eessi.io/#infrastructure-status","title":"Infrastructure status","text":"

The status of the CernVM-FS infrastructure for the production repository is shown at https://status.eessi.io.

"},{"location":"site_specific_config/gpu/","title":"GPU support","text":"

More information on the actions that must be performed to ensure that GPU software included in EESSI can use the GPU in your system is available below.

Please open a support issue if you need help or have questions regarding GPU support.

Make sure the ${EESSI_VERSION} version placeholder is defined!

In this page, we use ${EESSI_VERSION} as a placeholder for the version of the EESSI repository, for example:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}\n

Before inspecting paths, or executing any of the specified commands, you should define $EESSI_VERSION first, for example with:

export EESSI_VERSION=2023.06\n

"},{"location":"site_specific_config/gpu/#nvidia","title":"Support for using NVIDIA GPUs","text":"

EESSI supports running CUDA-enabled software. All CUDA-enabled modules are marked with the (gpu) feature, which is visible in the output produced by module avail.

"},{"location":"site_specific_config/gpu/#nvidia_drivers","title":"NVIDIA GPU drivers","text":"

For CUDA-enabled software to run, it needs to be able to find the NVIDIA GPU drivers of the host system. The challenge here is that the NVIDIA GPU drivers are not always in a standard system location, and that we can not install the GPU drivers in EESSI (since they are too closely tied to the client OS and GPU hardware).

"},{"location":"site_specific_config/gpu/#cuda_sdk","title":"Compiling CUDA software","text":"

An additional requirement applies if you want to compile CUDA-enabled software using a CUDA installation included in EESSI: this requires a full CUDA SDK, but the CUDA SDK End User License Agreement (EULA) does not allow for full redistribution. In EESSI, we are (currently) only allowed to redistribute the files needed to run CUDA software.

Full CUDA SDK only needed to compile CUDA software

Without a full CUDA SDK on the host system, you will still be able to run CUDA-enabled software from the EESSI stack, you just won't be able to compile additional CUDA software.

Below, we describe how to make sure that the EESSI software stack can find your NVIDIA GPU drivers and (optionally) full installations of the CUDA SDK.

"},{"location":"site_specific_config/gpu/#driver_location","title":"Configuring CUDA driver location","text":"

All CUDA-enabled software in EESSI expects the CUDA drivers to be available in a specific subdirectory of the host_injections directory. In addition, installations of the CUDA SDK included in EESSI are stripped down to the files that we are allowed to redistribute; all other files are replaced by symbolic links that point to another specific subdirectory of host_injections. For example:

$ ls -l /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\nlrwxrwxrwx 1 cvmfs cvmfs 109 Dec 21 14:49 /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc -> /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen3/software/CUDA/12.1.1/bin/nvcc\n

If the corresponding full installation of the CUDA SDK is available there, the CUDA installation included in EESSI can be used to build CUDA software.
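A quick way to check whether the full SDK is in place is to test whether such a symlink actually resolves. The following is a minimal, self-contained sketch of that check: it uses a temporary example link rather than the real /cvmfs path, and the `link_status` helper is hypothetical (not part of EESSI).

```shell
# Sketch: a symlink under CVMFS points into host_injections; it is "dangling"
# until the full CUDA SDK is installed at the target. Demonstrated here with
# a temporary link instead of the real /cvmfs path.
link_status() {
    if [ -e "$1" ]; then echo "resolves"; else echo "dangling"; fi
}

tmpdir=$(mktemp -d)
ln -s "$tmpdir/not-installed/nvcc" "$tmpdir/nvcc"
link_status "$tmpdir/nvcc"   # prints "dangling": the link target does not exist yet
rm -rf "$tmpdir"
```

On a real system you would point `link_status` at the nvcc path shown above to see whether a full CUDA SDK installation is present under host_injections.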

"},{"location":"site_specific_config/gpu/#nvidia_eessi_native","title":"Using NVIDIA GPUs via a native EESSI installation","text":"

Here, we describe the steps to enable GPU support when you have a native EESSI installation on your system.

Required permissions

To enable GPU support for EESSI on your system, you will typically need to have system administration rights, since you need write permissions on the target directory of the host_injections symlink.

"},{"location":"site_specific_config/gpu/#exposing-nvidia-gpu-drivers","title":"Exposing NVIDIA GPU drivers","text":"

To install the symlinks to your GPU drivers in host_injections, run the link_nvidia_host_libraries.sh script that is included in EESSI:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/link_nvidia_host_libraries.sh\n

This script uses ldconfig on your host system to locate your GPU drivers, and creates symbolic links to them in the correct location under the host_injections directory. It also stores the CUDA version supported by the driver that the symlinks were created for.
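As an illustration, the kind of lookup the script performs via ldconfig can be sketched as follows. This is not the actual script logic: the sample output and the `extract_driver_libs` helper are hypothetical, and the real script parses the output of `ldconfig -p` on the host.

```shell
# Hedged sketch of locating host NVIDIA driver libraries via ldconfig-style
# output. The sample below is hypothetical; the real script inspects the
# host's own `ldconfig -p` listing.
extract_driver_libs() {
    # keep only the resolved paths of driver-related libraries
    awk '/libcuda\.so|libnvidia-ml\.so/ {print $NF}'
}

sample='libcuda.so.1 (libc6,x86-64) => /usr/lib64/libcuda.so.1
libnvidia-ml.so.1 (libc6,x86-64) => /usr/lib64/libnvidia-ml.so.1'

echo "$sample" | extract_driver_libs
```

The extracted paths are what ends up symlinked under host_injections.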

Re-run link_nvidia_host_libraries.sh after NVIDIA GPU driver update

You should re-run this script every time you update the NVIDIA GPU drivers on the host system.

Note that it is safe to re-run the script even if no driver updates were done: the script should detect that the current version of the drivers was already symlinked.

"},{"location":"site_specific_config/gpu/#installing-full-cuda-sdk-optional","title":"Installing full CUDA SDK (optional)","text":"

To install a full CUDA SDK under host_injections, use the install_cuda_host_injections.sh script that is included in EESSI:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh\n

For example, to install CUDA 12.1.1 in the directory that the host_injections variant symlink points to, using /tmp/$USER/EESSI as directory to store temporary files:

/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --cuda-version 12.1.1 --temp-dir /tmp/$USER/EESSI --accept-cuda-eula\n
You should choose the CUDA version you wish to install according to what CUDA versions are included in EESSI; see the output of module avail CUDA/ after setting up your environment for using EESSI.

You can run /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/scripts/gpu_support/nvidia/install_cuda_host_injections.sh --help to check all of the options.

Tip

This script uses EasyBuild to install the CUDA SDK. For this to work, two requirements need to be satisfied:

You can rely on the EasyBuild installation that is included in EESSI for this.

Alternatively, you may load an EasyBuild module manually before running the install_cuda_host_injections.sh script to make an eb command available.

"},{"location":"site_specific_config/gpu/#nvidia_eessi_container","title":"Using NVIDIA GPUs via EESSI in a container","text":"

We focus here on the Apptainer/Singularity use case, and have only tested the --nv option to enable access to GPUs from within the container.

If you are using the EESSI container to access the EESSI software, the procedure for enabling GPU support is slightly different and will be documented here eventually.

"},{"location":"site_specific_config/gpu/#exposing-nvidia-gpu-drivers_1","title":"Exposing NVIDIA GPU drivers","text":"

When running a container with apptainer or singularity, it is not necessary to run the link_nvidia_host_libraries.sh script, since both these tools use $LD_LIBRARY_PATH internally in order to make the host GPU drivers available in the container.

The only scenario where this would be required is if $LD_LIBRARY_PATH is modified or undefined.

"},{"location":"site_specific_config/gpu/#gpu_cuda_testing","title":"Testing the GPU support","text":"

The quickest way to test if software installations included in EESSI can access and use your GPU is to run the deviceQuery executable that is part of the CUDA-Samples module:

module load CUDA-Samples\ndeviceQuery\n
If both are successful, you should see information about your GPU printed to your terminal.
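In a batch job you may want to act on the result rather than read it by eye; a minimal sketch that treats a non-zero exit code as failure. Note that `true` stands in for the real command here only so the sketch is self-contained: on a GPU node you would pass `deviceQuery` (after `module load CUDA-Samples`) instead.

```shell
# Sketch: wrap a GPU check command and report its outcome based on exit code.
# Pass the real check command (e.g. deviceQuery) as the argument.
gpu_check() {
    if "$@" > /dev/null 2>&1; then
        echo "GPU check passed"
    else
        echo "GPU check failed"
    fi
}

gpu_check true   # prints "GPU check passed"
```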

"},{"location":"site_specific_config/host_injections/","title":"How to configure EESSI","text":""},{"location":"site_specific_config/host_injections/#why-configuration-is-necessary","title":"Why configuration is necessary","text":"

Just installing EESSI is enough to get started with the EESSI software stack on a CPU-based system. However, additional configuration is necessary in many other cases, such as: enabling GPU support on GPU-based systems, site-specific configuration or tuning of the MPI libraries provided by EESSI, or overriding EESSI's MPI library with an ABI-compatible host MPI.

"},{"location":"site_specific_config/host_injections/#the-host_injections-variant-symlink","title":"The host_injections variant symlink","text":"

To allow such site-specific configuration, the EESSI repository includes a special directory where system administrators can install files that can be picked up by the software installations included in EESSI. This special directory is located in /cvmfs/software.eessi.io/host_injections, and it is a CernVM-FS Variant Symlink: a symbolic link for which the target can be controlled by the CernVM-FS client configuration (for more info, see 'Variant Symlinks' in the official CernVM-FS documentation).

Default target for host_injections variant symlink

Unless otherwise configured in the CernVM-FS client configuration for the EESSI repository, the host_injections symlink points to /opt/eessi on the client system:

$ ls -l /cvmfs/software.eessi.io/host_injections\nlrwxrwxrwx 1 cvmfs cvmfs 10 Oct  3 13:51 /cvmfs/software.eessi.io/host_injections -> /opt/eessi\n

The target for this symlink can be controlled by setting the EESSI_HOST_INJECTIONS variable in your local CVMFS configuration for EESSI. E.g.

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/' > /etc/cvmfs/domain.d/eessi.io.local\"\n

Don't forget to reload the CernVM-FS configuration

After making a change to a CernVM-FS configuration file, you also need to reload the configuration:

sudo cvmfs_config reload\n

On a heterogeneous system, you may want to use different targets for the variant symlink for different node types. For example, you might have two types of GPU nodes (gpu1 and gpu2) for which the GPU drivers are not in the same location, or not of the same version. Since those are both things we configure under host_injections, you'll need separate host_injections directories for each node type. That can easily be achieved by putting e.g.

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/gpu1/' > /etc/cvmfs/domain.d/eessi.io.local\"\n

in the CVMFS config on the gpu1 nodes, and

sudo bash -c \"echo 'EESSI_HOST_INJECTIONS=/shared_fs/path/to/host/injections/gpu2/' > /etc/cvmfs/domain.d/eessi.io.local\"\n
in the CVMFS config on the gpu2 nodes.
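The per-node-type dispatch above could also be generated from the hostname, for example in a configuration-management template. The helper below is a hypothetical sketch: the hostname prefixes, the target paths, and the /opt/eessi default are taken from the examples on this page.

```shell
# Hypothetical helper: map a node's short hostname to the host_injections
# target to write into its CVMFS config (paths match the examples above;
# /opt/eessi is the default symlink target).
pick_host_injections() {
    case "$1" in
        gpu1*) echo '/shared_fs/path/to/host/injections/gpu1/' ;;
        gpu2*) echo '/shared_fs/path/to/host/injections/gpu2/' ;;
        *)     echo '/opt/eessi' ;;
    esac
}

pick_host_injections "gpu1-node03"   # prints the gpu1 target
```

The returned path would then be written as `EESSI_HOST_INJECTIONS=...` into /etc/cvmfs/domain.d/eessi.io.local, as shown above.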

"},{"location":"site_specific_config/lmod_hooks/","title":"Configuring site-specific Lmod hooks","text":"

You may want to customize what happens when certain modules are loaded, for example, you may want to set additional environment variables. This is possible with Lmod hooks. A typical example would be when you want to tune the OpenMPI module for your system by setting additional environment variables when an OpenMPI module is loaded.

"},{"location":"site_specific_config/lmod_hooks/#location-of-the-hooks","title":"Location of the hooks","text":"

The EESSI software stack provides its own set of hooks in $LMOD_PACKAGE_PATH/SitePackage.lua. This SitePackage.lua also searches for site-specific hooks in two additional locations:

The first allows for hooks that need to be executed for that system, irrespective of the CPU architecture. The second allows for hooks specific to a certain architecture.

"},{"location":"site_specific_config/lmod_hooks/#architecture-independent-hooks","title":"Architecture-independent hooks","text":"

Hooks are written in Lua and can use any of the standard Lmod functionality as described in the Lmod documentation. While there are many types of hooks, you most likely want to specify a load or unload hook. Note that the EESSI hooks provide a nice example of what you can do with hooks. Here, as an example, we will define a load hook that sets the environment variable MY_ENV_VAR to 1 whenever an OpenMPI module is loaded.

First, you typically want to load the necessary Lua packages:

-- $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/.lmod/SitePackage.lua\n\n-- The Strict package checks for the use of undeclared variables:\nrequire(\"strict\")\n\n-- Load the Lmod Hook package\nlocal hook=require(\"Hook\")\n

Next, we define a function that we want to use as a hook. Unfortunately, registering multiple hooks of the same type (e.g. multiple load hooks) is only supported in Lmod 8.7.35+. EESSI version 2023.06 uses Lmod 8.7.30. Thus, we define our function without the local keyword, so that we can still add to it later in an architecture-specific hook (if we wanted to):

-- Define a function for the hook\n-- Note that we define this without 'local' keyword\n-- That way we can still add to this function in an architecture-specific hook\nfunction set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n

For the same reason that multiple hooks cannot be registered, we need to combine our site-specific (architecture-independent) hook function with the function that specifies the EESSI load hook. Note that all EESSI hooks are called eessi_<hook_type>_hook by convention.

-- Registering multiple hook functions, e.g. multiple load hooks is only supported in Lmod 8.7.35+\n-- EESSI version 2023.06 uses lmod 8.7.30. Thus, we first have to combine all functions into a single one,\n-- before registering it as a hook\nlocal function combined_load_hook(t)\n    -- Call the EESSI load hook (if it exists)\n    -- Note that if you wanted to overwrite the EESSI hooks (not recommended!), you would omit this\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    -- Call the site-specific load hook\n    set_my_env_var_openmpi(t)\nend\n

Then, we can finally register this function as an Lmod hook:

hook.register(\"load\", combined_load_hook)\n

Thus, our complete $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/.lmod/SitePackage.lua now looks like this (omitting the comments):

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nfunction set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    set_my_env_var_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n

Note that for future EESSI versions, if they use Lmod 8.7.35+, this would be simplified to:

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_ENV_VAR', '1')\n    end\nend\n\nhook.register(\"load\", set_my_env_var_openmpi, \"append\")\n
"},{"location":"site_specific_config/lmod_hooks/#architecture-dependent-hooks","title":"Architecture-dependent hooks","text":"

Now, assume that in addition we want to set an environment variable MY_SECOND_ENV_VAR to 5, but only for nodes that have the zen3 architecture. First, again, you typically want to load the necessary Lua packages:

-- $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/software/linux/x86_64/amd/zen3/.lmod/SitePackage.lua\n\n-- The Strict package checks for the use of undeclared variables:\nrequire(\"strict\")\n\n-- Load the Lmod Hook package\nlocal hook=require(\"Hook\")\n

Next, we define the function for the hook itself

-- Define a function for the hook\n-- This time, we can define it as a local function, as there are no hooks more specific than this \nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n

Then, we combine the functions into one

local function combined_load_hook(t)\n    -- Call the EESSI load hook first\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    -- Then call the architecture-independent load hook (if it exists)\n    if set_my_env_var_openmpi ~= nil then\n        set_my_env_var_openmpi(t)\n    end\n    -- And finally the architecture-dependent load hook we just defined\n    set_my_second_env_var_openmpi(t)\nend\n

before finally registering it as an Lmod hook

hook.register(\"load\", combined_load_hook)\n

Thus, our full $EESSI_CVMFS_REPO/host_injections/$EESSI_VERSION/software/linux/x86_64/amd/zen3/.lmod/SitePackage.lua now looks like this (omitting the comments):

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n\nlocal function combined_load_hook(t)\n    if eessi_load_hook ~= nil then\n        eessi_load_hook(t)\n    end\n    if set_my_env_var_openmpi ~= nil then\n        set_my_env_var_openmpi(t)\n    end\n    set_my_second_env_var_openmpi(t)\nend\n\nhook.register(\"load\", combined_load_hook)\n

Again, note that for future EESSI versions, if they use Lmod 8.7.35+, this would simplify to

require(\"strict\")\nlocal hook=require(\"Hook\")\n\nlocal function set_my_second_env_var_openmpi(t)\n    local simpleName = string.match(t.modFullName, \"(.-)/\")\n    if simpleName == 'OpenMPI' then\n        setenv('MY_SECOND_ENV_VAR', '5')\n    end\nend\n\nhook.register(\"load\", set_my_second_env_var_openmpi, \"append\")\n
"},{"location":"software_layer/build_nodes/","title":"Build nodes","text":"

Any system can be used as a build node to create additional software installations that should be added to the EESSI CernVM-FS repository.

"},{"location":"software_layer/build_nodes/#requirements","title":"Requirements","text":"

OS and software:

Admin privileges are not required, as long as Singularity is installed.

Resources:

Instructions to install Singularity and screen (click to show commands):

CentOS 8 (x86_64 or aarch64 or ppc64le)
sudo dnf install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm\nsudo dnf update -y\nsudo dnf install -y screen singularity\n
"},{"location":"software_layer/build_nodes/#setting-up-the-container","title":"Setting up the container","text":"

Warning

It is highly recommended to start a screen or tmux session first!

A container image is provided that includes everything that is required to set up a writable overlay on top of the EESSI CernVM-FS repository.

First, pick a location on a local filesystem for the temporary directory:

Requirements:

NB. If you are going to install on a separate drive (due to lack of space on /), then you need to set some variables to point to that location. You will also need to bind mount it in the singularity command. Let's say that your drive is mounted in /srt. Then you change the relevant commands below to this:

export EESSI_TMPDIR=/srt/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\nmkdir /srt/tmp\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs,/srt/tmp:/tmp\"\nsingularity shell -B /srt --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n

We will assume that /tmp/$USER/EESSI meets these requirements:

export EESSI_TMPDIR=/tmp/$USER/EESSI\nmkdir -p $EESSI_TMPDIR\n

Create some subdirectories in this temporary directory:

mkdir -p $EESSI_TMPDIR/{home,overlay-upper,overlay-work}\nmkdir -p $EESSI_TMPDIR/{var-lib-cvmfs,var-run-cvmfs}\n

Configure Singularity cache directory, bind mounts, and (fake) home directory:

export SINGULARITY_CACHEDIR=$EESSI_TMPDIR/singularity_cache\nexport SINGULARITY_BIND=\"$EESSI_TMPDIR/var-run-cvmfs:/var/run/cvmfs,$EESSI_TMPDIR/var-lib-cvmfs:/var/lib/cvmfs\"\nexport SINGULARITY_HOME=\"$EESSI_TMPDIR/home:/home/$USER\"\n

Define the values to pass to the --fusemount option of the singularity command:

export EESSI_READONLY=\"container:cvmfs2 software.eessi.io /cvmfs_ro/software.eessi.io\"\nexport EESSI_WRITABLE_OVERLAY=\"container:fuse-overlayfs -o lowerdir=/cvmfs_ro/software.eessi.io -o upperdir=$EESSI_TMPDIR/overlay-upper -o workdir=$EESSI_TMPDIR/overlay-work /cvmfs/software.eessi.io\"\n

Start the container (which includes Debian 11, CernVM-FS and fuse-overlayfs):

singularity shell --fusemount \"$EESSI_READONLY\" --fusemount \"$EESSI_WRITABLE_OVERLAY\" docker://ghcr.io/eessi/build-node:debian11\n

Once the container image has been downloaded and converted to a Singularity image (SIF format), you should get a prompt like this:

...\nCernVM-FS: loading Fuse module... done\n\nSingularity>\n

and the EESSI CernVM-FS repository should be mounted:

Singularity> ls /cvmfs/software.eessi.io\nhost_injections  README.eessi  versions\n
"},{"location":"software_layer/build_nodes/#setting-up-the-environment","title":"Setting up the environment","text":"

Set up the environment by starting a Gentoo Prefix session using the startprefix command.

Make sure you use the correct version of the EESSI repository!

export EESSI_VERSION='2023.06' \n/cvmfs/software.eessi.io/versions/${EESSI_VERSION}/compat/linux/$(uname -m)/startprefix\n
"},{"location":"software_layer/build_nodes/#installing-software","title":"Installing software","text":"

Clone the software-layer repository:

git clone https://github.com/EESSI/software-layer.git\n

Run the software installation script in software-layer:

cd software-layer\n./EESSI-install-software.sh\n

This script will figure out the CPU microarchitecture of the host automatically (like x86_64/intel/haswell).

To build generic software installations (like x86_64/generic), use the --generic option:

./EESSI-install-software.sh --generic\n

Once all missing software has been installed, you should see a message like this:

No missing modules!\n
"},{"location":"software_layer/build_nodes/#creating-tarball-to-ingest","title":"Creating tarball to ingest","text":"

Before tearing down the build node, you should create a tarball to ingest into the EESSI CernVM-FS repository.

To create a tarball of all installations, assuming your build host is x86_64/intel/haswell:

export EESSI_VERSION='2023.06'\ncd /cvmfs/software.eessi.io/versions/${EESSI_VERSION}/software/linux\neessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell.tar.gz\"\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell\n

To create a tarball for specific installations, make sure you pick up both the software installation directories and the corresponding module files:

eessi_tar_gz=\"$HOME/eessi-${EESSI_VERSION}-haswell-OpenFOAM.tar.gz\"\n\ntar cvfz ${eessi_tar_gz} x86_64/intel/haswell/software/OpenFOAM x86_64/intel/haswell/modules/all/OpenFOAM\n

This tarball should be uploaded to the Stratum 0 server for ingestion. If needed, you can ask for help in the EESSI #software-layer Slack channel.
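To double-check that both the software installation directories and the module files made it into a tarball before uploading it, you can list its contents. The following is a self-contained sketch using a mock directory tree (the OpenFOAM paths are illustrative, assuming module files live under x86_64/intel/haswell/modules/all):

```shell
# Sketch: build a mock installation tree, create the tarball from the
# software and module directories, then list the archive to confirm
# both directories are included.
workdir=$(mktemp -d)
mkdir -p "$workdir/x86_64/intel/haswell/software/OpenFOAM"
mkdir -p "$workdir/x86_64/intel/haswell/modules/all/OpenFOAM"
tar -C "$workdir" -czf "$workdir/demo.tar.gz" \
    x86_64/intel/haswell/software/OpenFOAM \
    x86_64/intel/haswell/modules/all/OpenFOAM
tar tzf "$workdir/demo.tar.gz"
rm -rf "$workdir"
```

On a real build node you would run `tar tzf ${eessi_tar_gz}` on the tarball you just created instead.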

"},{"location":"software_layer/cpu_targets/","title":"CPU targets","text":"

In the 2023.06 version of the EESSI repository, the following CPU microarchitectures are supported.

The names of these CPU targets correspond to the names used by archspec.

"},{"location":"talks/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"

AWS HPC Tech Short (~8 min.) - 15 June 2023

"},{"location":"talks/2023/20230615_aws_tech_short/","title":"Making scientific software EESSI - and fast","text":"

AWS HPC Tech Short (~8 min.) - 15 June 2023

"},{"location":"talks/2023/20231027_packagingcon23_eessi/","title":"Streaming optimized scientific software installations on any Linux distro with EESSI","text":""},{"location":"talks/2023/20231204_cvmfs_hpc/","title":"Best Practices for CernVM-FS in HPC","text":""},{"location":"talks/2023/20231205_castiel2_eessi_intro/","title":"Streaming Optimised Scientific Software: an Introduction to EESSI","text":""},{"location":"test-suite/","title":"EESSI test suite","text":"

The EESSI test suite is a collection of tests that are run using ReFrame. It is used to check whether the software installations included in the EESSI software layer are working and performing as expected.

To get started, you should look into the installation and configuration guidelines first.

To write the ReFrame configuration file for your system, check ReFrame configuration file.

For an overview of the available software tests, see available-tests.md.

For more information on using the EESSI test suite, see here.

See also release notes for the EESSI test suite.

"},{"location":"test-suite/ReFrame-configuration-file/","title":"ReFrame configuration file","text":"

In order for ReFrame to run tests on your system, it needs to know some properties about your system. For example, it needs to know what kind of job scheduler you have, which partitions the system has, how to submit to those partitions, etc. All of this has to be described in a ReFrame configuration file (see also the section on $RFM_CONFIG_FILES).

This page is organized as follows:

"},{"location":"test-suite/ReFrame-configuration-file/#available-reframe-configuration-files","title":"Available ReFrame configuration files","text":"

There are some available ReFrame configuration files for HPC systems and public cloud in the config directory for more inspiration. Below is a simple ReFrame configuration file with minimal changes required for getting you started on using the test suite for a CPU partition. Please check that stagedir is set to a path on a (shared) scratch filesystem for storing (temporary) files related to the tests, and access is set to a list of arguments that you would normally pass to the scheduler when submitting to this partition (for example '-p cpu' for submitting to a Slurm partition called cpu).

To write a ReFrame configuration file for your system, check the section How to write a ReFrame configuration file.

\"\"\"\nsimple ReFrame configuration file\n\"\"\"\nimport os\n\nfrom eessi.testsuite.common_config import common_logging_config, common_eessi_init, format_perfvars, perflog_format\nfrom eessi.testsuite.constants import *  \n\nsite_configuration = {\n    'systems': [\n        {\n            'name': 'cpu_partition',\n            'descr': 'CPU partition',\n            'modules_system': 'lmod',\n            'hostnames': ['*'],\n            # Note that the stagedir should be a shared directory available on all nodes running ReFrame tests\n            'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n            'partitions': [\n                {\n                    'name': 'cpu_partition',\n                    'descr': 'CPU partition',\n                    'scheduler': 'slurm',\n                    'launcher': 'mpirun',\n                    'access':  ['-p cpu', '--export=None'],\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'resources': [\n                        {\n                            'name': 'memory',\n                            'options': ['--mem={size}'],\n                        }\n                    ],\n                    'features': [\n                        FEATURES[CPU]\n                    ] + list(SCALES.keys()),\n                }\n            ]\n        },\n    ],\n    'environments': [\n        {\n            'name': 'default',\n            'cc': 'cc',\n            'cxx': '',\n            'ftn': '',\n        },\n    ],\n    'logging': common_logging_config(),\n    'general': [\n        {\n            # Enable automatic detection of CPU architecture for each partition\n            # See https://reframe-hpc.readthedocs.io/en/stable/configure.html#auto-detecting-processor-information\n            'remote_detect': True,\n        }\n    ],\n}\n\n# optional logging to 
syslog\nsite_configuration['logging'][0]['handlers_perflog'].append({\n    'type': 'syslog',\n    'address': '/dev/log',\n    'level': 'info',\n    'format': f'reframe: {perflog_format}',\n    'format_perfvars': format_perfvars,\n    'append': True,\n})\n
"},{"location":"test-suite/ReFrame-configuration-file/#verifying-your-reframe-configuration","title":"Verifying your ReFrame configuration","text":"

To verify the ReFrame configuration, you can query the configuration using --show-config.

To see the full configuration, use:

reframe --show-config\n

To only show the configuration of a particular system partition, you can use the --system option. To query a specific setting, you can pass an argument to --show-config.

For example, to show the configuration of the gpu partition of the example system:

reframe --system example:gpu --show-config systems/0/partitions\n

You can drill down further to only show the value of a particular configuration setting.

For example, to only show the launcher value for the gpu partition of the example system:

reframe --system example:gpu --show-config systems/0/partitions/@gpu/launcher\n
"},{"location":"test-suite/ReFrame-configuration-file/#write-reframe-config","title":"How to write a ReFrame configuration file","text":"

The official ReFrame documentation provides the full description on configuring ReFrame for your site. However, there are some configuration settings that are specifically required for the EESSI test suite. Also, ReFrame has a large number of configuration settings, which can make the official documentation a bit overwhelming.

Here, we will describe how to create a configuration file that works with the EESSI test suite, starting from an example configuration file settings_example.py, which defines the most common configuration settings.

"},{"location":"test-suite/ReFrame-configuration-file/#python-imports","title":"Python imports","text":"

The EESSI test suite standardizes a few string-based values as constants, as well as the logging format used by ReFrame. Every ReFrame configuration file used for running the EESSI test suite should therefore start with the following import statements:

from eessi.testsuite.common_config import common_logging_config, common_eessi_init\nfrom eessi.testsuite.constants import *\n
"},{"location":"test-suite/ReFrame-configuration-file/#high-level-system-info-systems","title":"High-level system info (systems)","text":"

First, we describe the system at its highest level through the systems keyword.

You can define multiple systems in a single configuration file (systems is a Python list value). We recommend defining just a single system in each configuration file, as it makes the configuration file a bit easier to digest (for humans).

An example of the systems section of the configuration file would be:

site_configuration = {\n    'systems': [\n    # We could list multiple systems. Here, we just define one\n        {\n            'name': 'example',\n            'descr': 'Example cluster',\n            'modules_system': 'lmod',\n            'hostnames': ['*'],\n            'stagedir': f'/some/shared/dir/{os.environ.get(\"USER\")}/reframe_output/staging',\n            'partitions': [...],\n        }\n    ]\n}\n

The most common configuration items defined at this level are:

"},{"location":"test-suite/ReFrame-configuration-file/#partitions","title":"System partitions (systems.partitions)","text":"

The next step is to add the system partitions to the configuration file; partitions are also specified as a Python list, since a system can have multiple partitions.

The partitions section of the configuration for a system with two Slurm partitions (one CPU partition, and one GPU partition) could for example look something like this:

site_configuration = {\n    'systems': [\n        {\n            ...\n            'partitions': [\n                {\n                    'name': 'cpu_partition',\n                    'descr': 'CPU partition',\n                    'scheduler': 'slurm',\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'launcher': 'mpirun',\n                    'access':  ['-p cpu'],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'features': [\n                        FEATURES[CPU]\n                    ] + list(SCALES.keys()),\n                },\n                {\n                    'name': 'gpu_partition',\n                    'descr': 'GPU partition',\n                    'scheduler': 'slurm',\n                    'prepare_cmds': ['source %s' % common_eessi_init()],\n                    'launcher': 'mpirun',\n                    'access':  ['-p gpu'],\n                    'environs': ['default'],\n                    'max_jobs': 4,\n                    'resources': [\n                        {\n                            'name': '_rfm_gpu',\n                            'options': ['--gpus-per-node={num_gpus_per_node}'],\n                        }\n                    ],\n                    'devices': [\n                        {\n                            'type': DEVICE_TYPES[GPU],\n                            'num_devices': 4,\n                        }\n                    ],\n                    'features': [\n                        FEATURES[CPU],\n                        FEATURES[GPU],\n                    ],\n                    'extras': {\n                        GPU_VENDOR: GPU_VENDORS[NVIDIA],\n                    },\n                },\n            ]\n        }\n    ]\n}\n

The most common configuration items defined at this level are:

Note that as more tests are added to the EESSI test suite, the use of features, devices and extras by the EESSI test suite may be extended, which may require an update of your configuration file to define newly recognized fields.

Note

Keep in mind that ReFrame partitions are virtual entities: they may or may not correspond to a partition as it is configured in your batch system. One might for example have a single partition in the batch system, but configure it as two separate partitions in the ReFrame configuration file based on additional constraints that are passed to the scheduler, see for example the AWS CitC example configuration.

The EESSI test suite (and more generally, ReFrame) assumes the hardware within a partition defined in the ReFrame configuration file is homogeneous.

"},{"location":"test-suite/ReFrame-configuration-file/#environments","title":"Environments","text":"

ReFrame needs a programming environment to be defined in its configuration file for tests that need to be compiled before they are run. While we don't have such tests in the EESSI test suite, ReFrame requires some programming environment to be defined:

site_configuration = {\n    ...\n    'environments': [\n        {\n            'name': 'default',  # Note: needs to match whatever we set for 'environs' in the partition\n            'cc': 'cc',\n            'cxx': '',\n            'ftn': '',\n        }\n    ]\n}\n

Note

The name here needs to match whatever we specified for the environs property of the partitions.

"},{"location":"test-suite/ReFrame-configuration-file/#logging","title":"Logging","text":"

ReFrame allows a large degree of control over what gets logged, and where. For convenience, we have created a common logging configuration in eessi.testsuite.common_config that provides a reasonable default. It can be used by importing common_logging_config and calling it as a function to define the 'logging' setting:

from eessi.testsuite.common_config import common_logging_config\n\nsite_configuration = {\n    ...\n    'logging':  common_logging_config(),\n}\n
When combined with setting the $RFM_PREFIX environment variable (which we recommend), the output, performance logs, and regular ReFrame logs will all end up in the directory specified by $RFM_PREFIX.

Alternatively, a prefix can be passed as an argument like common_logging_config(prefix), which will control where the regular ReFrame log ends up. Note that the performance logs do not respect this prefix: they will still end up in the standard ReFrame prefix (by default the current directory, unless otherwise set with $RFM_PREFIX or --prefix).
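For instance, a configuration that passes an explicit prefix could look like this (a minimal sketch; the directory path is hypothetical):

```python
from eessi.testsuite.common_config import common_logging_config

site_configuration = {
    # ... other settings (systems, environments) ...
    # The regular ReFrame log file will be written under this (hypothetical)
    # prefix; performance logs still follow the standard ReFrame prefix.
    'logging': common_logging_config('/some/shared/dir/reframe_logs'),
}
```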

"},{"location":"test-suite/ReFrame-configuration-file/#cpu-auto-detection","title":"Auto-detection of processor information","text":"

You can let ReFrame auto-detect the processor information for your system.

"},{"location":"test-suite/ReFrame-configuration-file/#creation-of-topology-file-by-reframe","title":"Creation of topology file by ReFrame","text":"

ReFrame will automatically use auto-detection when two conditions are met:

  1. The partitions section of your configuration file does not specify processor information for a particular partition (as per our recommendation in the previous section);
  2. The remote_detect option is enabled in the general part of the configuration, as follows:
    site_configuration = {\n    'systems': ...\n    'logging': ...\n    'general': [\n        {\n            'remote_detect': True,\n        }\n    ]\n}\n

To trigger the auto-detection of processor information, it is sufficient to let ReFrame list the available tests:

reframe --list\n

ReFrame will store the processor information for your system in ~/.reframe/topology/<system>-<partition>/processor.json.

"},{"location":"test-suite/ReFrame-configuration-file/#create-topology-file","title":"Create topology file","text":"

You can also use the reframe option --detect-host-topology to create the topology file yourself.

Run the following command on the cluster for which you need the topology:

reframe --detect-host-topology[=FILE]\n

The output is written to FILE if one is specified, and printed to standard output otherwise. It will look something like this:

{\n  \"arch\": \"skylake_avx512\",\n  \"topology\": {\n    \"numa_nodes\": [\n      \"0x111111111\",\n      \"0x222222222\",\n      \"0x444444444\",\n      \"0x888888888\"\n    ],\n    \"sockets\": [\n      \"0x555555555\",\n      \"0xaaaaaaaaa\"\n    ],\n    \"cores\": [\n      \"0x000000001\",\n      \"0x000000002\",\n      \"0x000000004\",\n      \"0x000000008\",\n      \"0x000000010\",\n      \"0x000000020\",\n      \"0x000000040\",\n      \"0x000000080\",\n      \"0x000000100\",\n      \"0x000000200\",\n      \"0x000000400\",\n      \"0x000000800\",\n      \"0x000001000\",\n      \"0x000002000\",\n      \"0x000004000\",\n      \"0x000008000\",\n      \"0x000010000\",\n      \"0x000020000\",\n      \"0x000040000\",\n      \"0x000080000\",\n      \"0x000100000\",\n      \"0x000200000\",\n      \"0x000400000\",\n      \"0x000800000\",\n      \"0x001000000\",\n      \"0x002000000\",\n      \"0x004000000\",\n      \"0x008000000\",\n      \"0x010000000\",\n      \"0x020000000\",\n      \"0x040000000\",\n      \"0x080000000\",\n      \"0x100000000\",\n      \"0x200000000\",\n      \"0x400000000\",\n      \"0x800000000\"\n    ],\n    \"caches\": [\n      {\n        \"type\": \"L2\",\n        \"size\": 1048576,\n        \"linesize\": 64,\n        \"associativity\": 16,\n        \"num_cpus\": 1,\n        \"cpusets\": [\n          \"0x000000001\",\n          \"0x000000002\",\n          \"0x000000004\",\n          \"0x000000008\",\n          \"0x000000010\",\n          \"0x000000020\",\n          \"0x000000040\",\n          \"0x000000080\",\n          \"0x000000100\",\n          \"0x000000200\",\n          \"0x000000400\",\n          \"0x000000800\",\n          \"0x000001000\",\n          \"0x000002000\",\n          \"0x000004000\",\n          \"0x000008000\",\n          \"0x000010000\",\n          \"0x000020000\",\n          \"0x000040000\",\n          \"0x000080000\",\n          \"0x000100000\",\n          \"0x000200000\",\n          \"0x000400000\",\n          
\"0x000800000\",\n          \"0x001000000\",\n          \"0x002000000\",\n          \"0x004000000\",\n          \"0x008000000\",\n          \"0x010000000\",\n          \"0x020000000\",\n          \"0x040000000\",\n          \"0x080000000\",\n          \"0x100000000\",\n          \"0x200000000\",\n          \"0x400000000\",\n          \"0x800000000\"\n        ]\n      },\n      {\n        \"type\": \"L1\",\n        \"size\": 32768,\n        \"linesize\": 64,\n        \"associativity\": 8,\n        \"num_cpus\": 1,\n        \"cpusets\": [\n          \"0x000000001\",\n          \"0x000000002\",\n          \"0x000000004\",\n          \"0x000000008\",\n          \"0x000000010\",\n          \"0x000000020\",\n          \"0x000000040\",\n          \"0x000000080\",\n          \"0x000000100\",\n          \"0x000000200\",\n          \"0x000000400\",\n          \"0x000000800\",\n          \"0x000001000\",\n          \"0x000002000\",\n          \"0x000004000\",\n          \"0x000008000\",\n          \"0x000010000\",\n          \"0x000020000\",\n          \"0x000040000\",\n          \"0x000080000\",\n          \"0x000100000\",\n          \"0x000200000\",\n          \"0x000400000\",\n          \"0x000800000\",\n          \"0x001000000\",\n          \"0x002000000\",\n          \"0x004000000\",\n          \"0x008000000\",\n          \"0x010000000\",\n          \"0x020000000\",\n          \"0x040000000\",\n          \"0x080000000\",\n          \"0x100000000\",\n          \"0x200000000\",\n          \"0x400000000\",\n          \"0x800000000\"\n        ]\n      },\n      {\n        \"type\": \"L3\",\n        \"size\": 25952256,\n        \"linesize\": 64,\n        \"associativity\": 11,\n        \"num_cpus\": 18,\n        \"cpusets\": [\n          \"0x555555555\",\n          \"0xaaaaaaaaa\"\n        ]\n      }\n    ]\n  },\n  \"num_cpus\": 36,\n  \"num_cpus_per_core\": 1,\n  \"num_cpus_per_socket\": 18,\n  \"num_sockets\": 2\n}\n

Note

ReFrame 4.5.1 generates more parameters than it can parse. To resolve this issue, you can remove the following parameters from the generated topology file: vendor, model, and/or platform.

For ReFrame to find the topology file, it needs to be located at the following path: ~/.reframe/topology/<system_name>-<partition_name>/processor.json
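The fix described in the note above can be scripted; a minimal sketch (the clean_topology helper is hypothetical, and the file path should match the one ReFrame expects):

```python
import json

def clean_topology(topo: dict) -> dict:
    """Drop the fields that ReFrame 4.5.1 writes to the topology file
    but cannot parse back (vendor, model, platform)."""
    for key in ('vendor', 'model', 'platform'):
        topo.pop(key, None)
    return topo

# Hypothetical usage on a generated topology file:
# path = '~/.reframe/topology/<system_name>-<partition_name>/processor.json'
# with open(path) as fp:
#     topo = json.load(fp)
# with open(path, 'w') as fp:
#     json.dump(clean_topology(topo), fp, indent=2)
```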

"},{"location":"test-suite/available-tests/","title":"Available tests","text":"

The EESSI test suite currently includes tests for:

For a complete overview of all available tests in the EESSI test suite, see the eessi/testsuite/tests subdirectory in the EESSI/test-suite GitHub repository.

"},{"location":"test-suite/available-tests/#gromacs","title":"GROMACS","text":"

Several tests for GROMACS, a software package to perform molecular dynamics simulations, are included, which use the systems included in the HECBioSim benchmark suite:

It is implemented in tests/apps/gromacs.py, on top of the GROMACS test that is included in the ReFrame test library hpctestlib.

To run this GROMACS test with all HECBioSim systems, use:

reframe --run --name GROMACS\n

To run this GROMACS test only for a specific HECBioSim system, use for example:

reframe --run --name 'GROMACS.*HECBioSim/hEGFRDimerPair'\n

To run this GROMACS test with the smallest HECBioSim system (Crambin), you can use the CI tag:

reframe --run --name GROMACS --tag CI\n
"},{"location":"test-suite/available-tests/#tensorflow","title":"TensorFlow","text":"

A test for TensorFlow, a machine learning framework, is included, which is based on the \"Multi-worker training with Keras\" TensorFlow tutorial.

It is implemented in tests/apps/tensorflow/.

To run this TensorFlow test, use:

reframe --run --name TensorFlow\n

Warning

This test requires TensorFlow v2.11 or newer; using an older TensorFlow version will not work!

"},{"location":"test-suite/available-tests/#osumicrobenchmarks","title":"OSU Micro-Benchmarks","text":"

A test for OSU Micro-Benchmarks, which provides an MPI benchmark.

It is implemented in tests/apps/osu.py.

To run this OSU Micro-Benchmarks test, use:

reframe --run --name OSU-Micro-Benchmarks\n

Warning

This test requires OSU Micro-Benchmarks v5.9 or newer; using an older OSU Micro-Benchmarks version will not work!

"},{"location":"test-suite/available-tests/#espresso","title":"ESPResSo","text":"

A test for ESPResSo, a software package for performing and analysing scientific molecular dynamics simulations.

It is implemented in tests/apps/espresso/.

Two test cases are included:

* P3M (ionic crystals)
* LJ (Lennard-Jones particle box)

Both tests are weak scaling tests, and therefore the number of particles is scaled based on the number of MPI ranks.
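Weak scaling keeps the work per MPI rank constant by growing the total problem size with the rank count. A minimal illustration (the helper and the base particle count are hypothetical, not taken from the actual tests):

```python
def weak_scaled_particles(base_particles_per_rank: int, n_ranks: int) -> int:
    """In a weak scaling setup, the total problem size grows linearly with
    the number of MPI ranks, keeping the work per rank constant."""
    return base_particles_per_rank * n_ranks

# The work per rank stays the same regardless of scale
for ranks in (1, 2, 4, 8):
    total = weak_scaled_particles(1000, ranks)
    assert total / ranks == 1000
```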

To run this ESPResSo test, use:

reframe --run --name ESPResSo\n
"},{"location":"test-suite/available-tests/#quantumespresso","title":"QuantumESPRESSO","text":"

A test for QuantumESPRESSO, an integrated suite of computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (both norm-conserving and ultrasoft).

It is implemented in tests/apps/QuantumESPRESSO.py.

To run this QuantumESPRESSO test, use:

reframe --run --name QuantumESPRESSO\n

Warning

This test requires ReFrame v4.6.0 or newer; in older versions the QuantumESPRESSO test is not included in hpctestlib!

"},{"location":"test-suite/installation-configuration/","title":"Installing and configuring the EESSI test suite","text":"

This page covers the requirements, installation and configuration of the EESSI test suite.

"},{"location":"test-suite/installation-configuration/#requirements","title":"Requirements","text":"

The EESSI test suite requires

"},{"location":"test-suite/installation-configuration/#installing-reframe","title":"Installing Reframe","text":"

General instructions for installing ReFrame are available in the ReFrame documentation. To check if ReFrame is available, run the reframe command:

reframe --version\n
(for more details on the ReFrame version requirement, click here)

Two important bugs were resolved in ReFrame's CPU autodetect functionality in version 4.3.3.

We strongly recommend you use ReFrame >= 4.3.3.

If you are using an older version of ReFrame, you may encounter some issues:

"},{"location":"test-suite/installation-configuration/#installing-reframe-test-library-hpctestlib","title":"Installing ReFrame test library (hpctestlib)","text":"

The EESSI test suite requires that the ReFrame test library (hpctestlib) is available, which is currently not included in a standard installation of ReFrame.

We recommend installing ReFrame using EasyBuild (version 4.8.1, or newer), or using a ReFrame installation that is available in the EESSI repository (version 2023.06, or newer).

For example (using EESSI):

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load ReFrame/4.3.3\n

To check whether the ReFrame test library is available, try importing a submodule of the hpctestlib Python package:

python3 -c 'import hpctestlib.sciapps.gromacs'\n
"},{"location":"test-suite/installation-configuration/#installation","title":"Installation","text":"

To install the EESSI test suite, you can either use pip or clone the GitHub repository directly:

"},{"location":"test-suite/installation-configuration/#pip-install","title":"Using pip","text":"
pip install git+https://github.com/EESSI/test-suite.git\n
"},{"location":"test-suite/installation-configuration/#cloning-the-repository","title":"Cloning the repository","text":"
git clone https://github.com/EESSI/test-suite $HOME/EESSI-test-suite\ncd EESSI-test-suite\nexport PYTHONPATH=$PWD:$PYTHONPATH\n
"},{"location":"test-suite/installation-configuration/#verify-installation","title":"Verify installation","text":"

To check whether the EESSI test suite installed correctly, try importing the eessi.testsuite Python package:

python3 -c 'import eessi.testsuite'\n
"},{"location":"test-suite/installation-configuration/#configuration","title":"Configuration","text":"

Before you can run the EESSI test suite, you need to create a configuration file for ReFrame that is specific to the system on which the tests will be run.

Example configuration files are available in the config subdirectory of the EESSI/test-suite GitHub repository (https://github.com/EESSI/test-suite/tree/main/config), which you can use as a template to create your own.

"},{"location":"test-suite/installation-configuration/#configuring-reframe-environment-variables","title":"Configuring ReFrame environment variables","text":"

We recommend setting a couple of $RFM_* environment variables to configure ReFrame, to avoid needing to include particular options to the reframe command over and over again.

"},{"location":"test-suite/installation-configuration/#RFM_CONFIG_FILES","title":"ReFrame configuration file ($RFM_CONFIG_FILES)","text":"

(see also RFM_CONFIG_FILES in ReFrame docs)

Define the $RFM_CONFIG_FILES environment variable to instruct ReFrame which configuration file to use, for example:

export RFM_CONFIG_FILES=$HOME/EESSI-test-suite/config/example.py\n

Alternatively, you can use the --config-file (or -C) reframe option.

See the section on the ReFrame configuration file for more information.

"},{"location":"test-suite/installation-configuration/#search-path-for-tests-rfm_check_search_path","title":"Search path for tests ($RFM_CHECK_SEARCH_PATH)","text":"

(see also RFM_CHECK_SEARCH_PATH in ReFrame docs)

Define the $RFM_CHECK_SEARCH_PATH environment variable to tell ReFrame which directory to search for tests.

In addition, define $RFM_CHECK_SEARCH_RECURSIVE to ensure that ReFrame searches $RFM_CHECK_SEARCH_PATH recursively (i.e. so that also tests in subdirectories are found).

For example:

export RFM_CHECK_SEARCH_PATH=$HOME/EESSI-test-suite/eessi/testsuite/tests\nexport RFM_CHECK_SEARCH_RECURSIVE=1\n

Alternatively, you can use the --checkpath (or -c) and --recursive (or -R) reframe options.

"},{"location":"test-suite/installation-configuration/#RFM_PREFIX","title":"ReFrame prefix ($RFM_PREFIX)","text":"

(see also RFM_PREFIX in ReFrame docs)

Define the $RFM_PREFIX environment variable to tell ReFrame where to store the files it produces. For example:

export RFM_PREFIX=$HOME/reframe_runs\n

This involves:

Note that by default, ReFrame uses the current directory as prefix. We recommend setting a prefix so that logs are not scattered around, and are neatly appended to for each run.

If our common logging configuration is used, the regular ReFrame log file will also end up in the location specified by $RFM_PREFIX.

Warning

Using the --prefix option in your reframe command is not equivalent to setting $RFM_PREFIX, since our common logging configuration only picks up on the $RFM_PREFIX environment variable to determine the location for the ReFrame log file.

"},{"location":"test-suite/release-notes/","title":"Release notes for EESSI test suite","text":""},{"location":"test-suite/release-notes/#030-27-june-2024","title":"0.3.0 (27 june 2024)","text":"

This is a minor release of the EESSI test suite.

It includes:

"},{"location":"test-suite/release-notes/#020-7-march-2024","title":"0.2.0 (7 march 2024)","text":"

This is a minor release of the EESSI test suite.

It includes:

Bug fixes:

"},{"location":"test-suite/release-notes/#010-5-october-2023","title":"0.1.0 (5 October 2023)","text":"

Version 0.1.0 is the first release of the EESSI test suite.

It includes:

"},{"location":"test-suite/usage/","title":"Using the EESSI test suite","text":"

This page covers the usage of the EESSI test suite.

We assume you have already installed and configured the EESSI test suite on your system.

"},{"location":"test-suite/usage/#listing-available-tests","title":"Listing available tests","text":"

To list the tests that are available in the EESSI test suite, use reframe --list (or reframe -L for short).

If you have properly configured ReFrame, you should see a (potentially long) list of checks in the output:

$ reframe --list\n...\n[List of matched checks]\n- ...\nFound 123 check(s)\n

Note

When using --list, checks are only generated based on modules that are available in the system where the reframe command is invoked.

The system partitions specified in your ReFrame configuration file are not taken into account when using --list.

So, if --list produces an overview of 50 checks, and you have 4 system partitions in your configuration file, actually running the test suite may result in (up to) 200 checks being executed.

"},{"location":"test-suite/usage/#dry-run","title":"Performing a dry run","text":"

To perform a dry run of the EESSI test suite, use reframe --dry-run:

$ reframe --dry-run\n...\n[==========] Running 1234 check(s)\n\n[----------] start processing checks\n[ DRY      ] GROMACS_EESSI ...\n...\n[----------] all spawned checks have finished\n\n[  PASSED  ] Ran 1234/1234 test case(s) from 1234 check(s) (0 failure(s), 0 skipped, 0 aborted)\n

Note

When using --dry-run, the system partitions listed in your ReFrame configuration file are also taken into account when generating checks, next to available modules and test parameters, which is not the case when using --list.

"},{"location":"test-suite/usage/#running-the-full-test-suite","title":"Running the (full) test suite","text":"

To actually run the (full) EESSI test suite and let ReFrame produce a performance report, use reframe --run --performance-report.

We strongly recommend filtering the checks that will be run by using additional options like --system, --name, --tag (see the 'Filtering tests' section below), and doing a dry run first to make sure that the generated checks correspond to what you have in mind.

"},{"location":"test-suite/usage/#reframe-output-and-log-files","title":"ReFrame output and log files","text":"

ReFrame will generate various output and log files:

We strongly recommend controlling where these files go by using the common logging configuration that is provided by the EESSI test suite in your ReFrame configuration file and setting $RFM_PREFIX (avoid using the command line option --prefix).

If you do, and if you use ReFrame v4.3.3 or newer, you should find the output and log files at:

In the stage and output directories, there will be a subdirectory for each check that was run, which are tagged with a unique hash (like d3adb33f) that is determined based on the specific parameters for that check (see the ReFrame documentation for more details on the test naming scheme).

"},{"location":"test-suite/usage/#filtering-tests","title":"Filtering tests","text":"

By default, ReFrame will automatically generate checks for each system partition, based on the tests available in the EESSI test suite, available software modules, and tags defined in the EESSI test suite.

To avoid being overwhelmed by checks, it is recommended to apply filters so ReFrame only generates the checks you are interested in.

"},{"location":"test-suite/usage/#filter-name","title":"Filtering by test name","text":"

You can filter checks based on the full test name using the --name option (or -n), which includes the value for all test parameters.

Here's an example of a full test name:

GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f @example:gpu+default\n

To let ReFrame only generate checks for GROMACS, you can use:

reframe --name GROMACS\n

To only run GROMACS checks with a particular version of GROMACS, you can use --name to only retain specific GROMACS modules:

reframe --name %module_name=GROMACS/2023.1\n

Likewise, you can filter on any part of the test name.

You can also select one specific check using the corresponding test hash, which is also part of the full test name (see /d3adb33f in the example above). For example:

reframe --name /d3adb33f\n

The argument passed to --name is interpreted as a Python regular expression, so you can use wildcards like .*, character ranges like [0-9], use ^ to specify that the pattern should match from the start of the test name, etc.
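Since --name patterns are Python regular expressions, you can check a pattern locally before passing it to ReFrame. A small sketch using the example full test name from above (this illustrates the regex semantics with re.search, it is not ReFrame's internal matching code):

```python
import re

full_name = ('GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu '
             '%scale=1_node %module_name=GROMACS/2023.1-foss-2022a '
             '/d3adb33f @example:gpu+default')

# Matches anywhere in the name, like: reframe --name GROMACS
assert re.search('GROMACS', full_name)
# Filter on a specific module version, like: --name %module_name=GROMACS/2023.1
assert re.search('%module_name=GROMACS/2023.1', full_name)
# Anchor at the start of the test name with ^
assert re.search('^GROMACS_EESSI', full_name)
```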

Use --list or --dry-run to check the impact of using the --name option.

"},{"location":"test-suite/usage/#filter-system-partition","title":"Filtering by system (partition)","text":"

By default, ReFrame will generate checks for each system partition that is listed in your configuration file.

To let ReFrame only generate checks for a particular system or system partition, you can use the --system option.

For example:

Use --dry-run to check the impact of using the --system option.

"},{"location":"test-suite/usage/#filter-tag","title":"Filtering by tags","text":"

To filter tests using one or more tags, you can use the --tag option.

Using --list-tags you can get a list of known tags.

To check the impact of this on generated checks by ReFrame, use --list or --dry-run.

"},{"location":"test-suite/usage/#ci-tag","title":"CI tag","text":"

For each software package included in the EESSI test suite, a small test is tagged with CI to indicate that it can be used in a Continuous Integration (CI) environment.

Hence, you can use this tag to let ReFrame only generate checks for small test cases:

reframe --tag CI\n

For example:

$ reframe --name GROMACS --tag CI\n...\n
"},{"location":"test-suite/usage/#scale-tags","title":"scale tags","text":"

The EESSI test suite defines a set of custom tags that control the scale of checks, which specify how many cores/GPUs/nodes should be used for running a check. The number of cores and GPUs serves as an upper limit; the actual count depends on the specific configuration of cores, GPUs, and sockets within the node, as well as the specific test being carried out.
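For the fractional node tags, a fraction of the available cores/GPUs is used, with a minimum of 1. This "at minimum 1" behaviour can be sketched as follows (hypothetical helper, not part of the test suite):

```python
def cores_for_scale(total_cores: int, fraction: float) -> int:
    """Number of cores a fractional-node scale tag selects:
    the given fraction of the available cores, but at least 1."""
    return max(1, int(total_cores * fraction))

# On a 4-core node, 1_8_node (12.5%) still uses 1 core
assert cores_for_scale(4, 1/8) == 1
# On a 128-core node, 1_4_node (25%) uses 32 cores
assert cores_for_scale(128, 1/4) == 32
```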

tag name | description
1_core | using 1 CPU core and 1 GPU
2_cores | using 2 CPU cores and 1 GPU
4_cores | using 4 CPU cores and 1 GPU
1cpn_2nodes | using 1 CPU core per node, 1 GPU per node, and 2 nodes
1cpn_4nodes | using 1 CPU core per node, 1 GPU per node, and 4 nodes
1_8_node | using 1/8th of a node (12.5% of available cores/GPUs, 1 at minimum)
1_4_node | using a quarter of a node (25% of available cores/GPUs, 1 at minimum)
1_2_node | using half of a node (50% of available cores/GPUs, 1 at minimum)
1_node | using a full node (all available cores/GPUs)
2_nodes | using 2 full nodes
4_nodes | using 4 full nodes
8_nodes | using 8 full nodes
16_nodes | using 16 full nodes"},{"location":"test-suite/usage/#using-multiple-tags","title":"Using multiple tags","text":"

To filter tests using multiple tags, you can pass the --tag option multiple times; only tests that match all specified tags will be selected.

"},{"location":"test-suite/usage/#example-commands","title":"Example commands","text":"

Running all GROMACS tests on 4 cores on the cpu partition

reframe --run --system example:cpu --name GROMACS --tag 4_cores --performance-report\n

List all checks for TensorFlow 2.11 using a single node

reframe --list --name %module_name=TensorFlow/2.11 --tag 1_node\n

Dry run of TensorFlow CI checks on a quarter (1/4) of a node (on all system partitions)

reframe --dry-run --name 'TensorFlow.*CUDA' --tag 1_4_node --tag CI\n
"},{"location":"test-suite/usage/#overriding-test-parameters-advanced","title":"Overriding test parameters (advanced)","text":"

You can override test parameters using the --setvar option (or -S).

This can be done either globally (for all tests), or only for specific tests (which is recommended when using --setvar).

For example, to run all GROMACS checks with a specific GROMACS module, you can use:

reframe --setvar GROMACS_EESSI.modules=GROMACS/2023.1-foss-2022a ...\n

Warning

We do not recommend using --setvar, since it is quite easy to make unintended changes to test parameters this way that can result in broken checks.

You should try filtering tests using the --name or --tag options instead.

"},{"location":"test-suite/writing-portable-tests/","title":"Writing portable tests","text":"

This page is a tutorial on how to write a new test for the EESSI test suite.

If you already know how to write regular ReFrame tests, we suggest you read the High-level overview and Test requirements sections, then skip ahead to Step 3: implementing as a portable ReFrame test.

"},{"location":"test-suite/writing-portable-tests/#high-level-overview","title":"High-level overview","text":"

In this tutorial, you will learn how to write a test for the EESSI test suite. It is important to realize in which context the test suite will be run. Roughly speaking, there are three uses:

The test suite contains a combination of real-life use cases for end-user scientific software (e.g. tests for GROMACS, TensorFlow, CP2K, OpenFOAM, etc) and low level tests (e.g. OSU Microbenchmarks).

The tests in the EESSI test suite are developed using the ReFrame HPC testing framework. Typically, ReFrame tests hardcode system-specific information (core counts, performance references, etc) in the test definition. The EESSI test suite aims to be portable, and implements a mixin class that invokes a series of standard hooks to replace information that is typically hardcoded. All system-specific information is then limited to the ReFrame configuration file. As an example: rather than hardcoding that a test should run with 128 tasks (i.e. because a system has 128-core nodes), the EESSI test suite has a hook that can declare that a test should be run on a "single, full node". The hook queries the ReFrame configuration file for the number of cores per node, and sets the task count accordingly. Thus, on a 64-core node, this test would run with 64 tasks, while on a 128-core node it would run with 128 tasks.
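The "single, full node" idea can be sketched in plain Python (a minimal sketch with an assumed, simplified config format; not the actual EESSI test-suite implementation). The dict stands in for the partition topology that ReFrame reads from its configuration file:

```python
# Simplified stand-in for the partition info from the ReFrame config file.
partition = {'name': 'example:cpu', 'num_cores_per_node': 128}

def assign_one_task_per_core(test: dict, partition: dict) -> dict:
    """Run on a single, full node with one task per core."""
    test['num_tasks_per_node'] = partition['num_cores_per_node']
    test['num_tasks'] = test['num_tasks_per_node']  # a single node
    return test

test = assign_one_task_per_core({}, partition)
# On this 128-core node the test runs with 128 tasks; with
# 'num_cores_per_node': 64 it would run with 64 tasks instead.
```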

"},{"location":"test-suite/writing-portable-tests/#test-requirements","title":"Test requirements","text":"

To be useful in the aforementioned scenarios, tests need to satisfy a number of requirements.

"},{"location":"test-suite/writing-portable-tests/#step-by-step-tutorial-for-writing-a-portable-reframe-test","title":"Step-by-step tutorial for writing a portable ReFrame test","text":"

In the next section, we will show how to write a test for the EESSI test suite by means of an example: we will create a test for mpi4py that executes an MPI_REDUCE call to sum the ranks of all processes. If you're unfamiliar with MPI or mpi4py, or want to see the exact code this test will run, you may want to read Background of the mpi4py test before proceeding. The complete test developed in this tutorial can be found in the tutorials/mpi4py directory of the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#step-1-writing-job-scripts-to-execute-tests","title":"Step 1: writing job scripts to execute tests","text":"

Although not strictly needed for the implementation of a ReFrame test, it is useful to first write a job script that runs this test the way you would want on a given system. For example, on a system with 128-core nodes, managed by SLURM, we might have the following job scripts to execute the mpi4py_reduce.py code.

To run on 2 cores:

#!/bin/bash\n#SBATCH --ntasks=2  # 2 tasks, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 2 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n
To run on one full node:
#!/bin/bash\n#SBATCH --ntasks=128  # min. 2 tasks in total, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --ntasks-per-node=128\n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 128 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n
To run on two full nodes:
#!/bin/bash\n#SBATCH --ntasks=256 # min. 2 tasks in total, since 2 processes is the minimal size on which I can do a reduction\n#SBATCH --ntasks-per-node=128 \n#SBATCH --cpus-per-task=1  # 1 core per task (this is a pure multiprocessing test, each process only uses 1 thread)\n#SBATCH --time=5:00  # This test is very fast. It shouldn't need more than 5 minutes\nsource /cvmfs/software.eessi.io/versions/2023.06/init/bash\nmodule load mpi4py/3.1.5-gompi-2023b\nmpirun -np 256 python3 mpi4py_reduce.py --n_iter 1000 --n_warmup 100\n

Clearly, such job scripts are not very portable: these only work on SLURM systems, we had to duplicate a lot to run on different scales, we would have to duplicate even more if we wanted to test multiple mpi4py versions, etc. This is where ReFrame comes in: it has support for different schedulers, and allows one to easily specify a range of parameters (such as the number of tasks in the above example) to create tests for.

"},{"location":"test-suite/writing-portable-tests/#step-2-implementing-as-a-non-portable-reframe-test","title":"Step 2: implementing as a non-portable ReFrame test","text":"

First, let us implement this as a non-portable test in ReFrame. This code can be found under tutorials/mpi4py/mpi4py_system_specific.py in the EESSI test suite repository. We will not elaborate on how to write ReFrame tests; this is well-documented in the official ReFrame documentation. We have put extensive comments in the test definition below, to make it easier to understand for readers with limited familiarity with ReFrame. Wherever the variables below have a specific meaning in ReFrame, we reference the official documentation:

\"\"\"\nThis module tests mpi4py's MPI_Reduce call\n\"\"\"\n\nimport reframe as rfm\nimport reframe.utility.sanity as sn\n\n# added only to make the linter happy\nfrom reframe.core.builtins import variable, parameter, run_after, performance_function, sanity_function\n\n\n# This python decorator indicates to ReFrame that this class defines a test\n# Our class inherits from rfm.RunOnlyRegressionTest, since this test does not have a compilation stage\n# https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RunOnlyRegressionTest\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest):\n    # Programming environments are only relevant for tests that compile something\n    # Since we are testing existing modules, we typically don't compile anything and simply define\n    # 'default' as the valid programming environment\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.valid_prog_environs\n    valid_prog_environs = ['default']\n\n    # Typically, we list here the name of our cluster as it is specified in our ReFrame configuration file\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.valid_systems\n    valid_systems = ['snellius']\n\n    # ReFrame will generate a test for each module\n    # NOTE: each parameter adds a new dimension to the parametrization space. \n    # (EG 4 parameters with (3,3,2,2) possible values will result in 36 tests).\n    # Be mindful of how many parameters you add to avoid the number of tests generated being excessive.\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.parameter\n    module_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\n\n    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n\n    # Our script has two arguments, --n_iter and --n_warmup. 
By defining these as ReFrame variables, we can\n    # enable the end-user to overwrite their value on the command line when invoking ReFrame.\n    # Note that we don't typically expose ALL variables, especially if a script has many - we expose\n    # only those that we think an end-user might want to overwrite\n    # Number of iterations to run (more iterations takes longer, but results in more accurate timing)\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.variable\n    n_iterations = variable(int, value=1000)\n\n    # Similar for the number of warmup iterations\n    n_warmup = variable(int, value=100)\n\n    # Define which executable to run\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.executable\n    executable = 'python3'\n\n    # Define which options to pass to the executable\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.executable_opts\n    executable_opts = ['mpi4py_reduce.py', '--n_iter', f'{n_iterations}', '--n_warmup', f'{n_warmup}']\n\n    # Define a time limit for the scheduler running this test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.time_limit\n    time_limit = '5m00s'\n\n    # Using this decorator, we tell ReFrame to run this AFTER the init step of the test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.run_after\n    # See https://reframe-hpc.readthedocs.io/en/stable/pipeline.html for all steps in the pipeline\n    # that reframe uses to execute tests. Note that after the init step, ReFrame has generated test instances for each\n    # of the combinations of parameters above. Thus, now, there are 6 instances (2 module names * 3 scales). 
Here,\n    # we set the modules to load equal to one of the module names\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.modules\n    @run_after('init')\n    def set_modules(self):\n        self.modules = [self.module_name]\n\n    # Similar for the scale, we now set the number of tasks equal to the scale for this instance\n    @run_after('init')\n    def define_task_count(self):\n        # Set the number of tasks, self.scale is now a single number out of the parameter list\n        # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks\n        self.num_tasks = self.scale\n        # Set the number of tasks per node to either be equal to the number of tasks, but at most 128,\n        # since we have 128-core nodes\n        # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks_per_node\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n\n    # Now, we check if the pattern 'Sum of all ranks: X' with X the correct sum for the amount of ranks is found\n    # in the standard output:\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.sanity_function\n    @sanity_function\n    def validate(self):\n        # Sum of 0, ..., N-1 is (N * (N-1) / 2)\n        sum_of_ranks = round(self.scale * ((self.scale - 1) / 2))\n        # https://reframe-hpc.readthedocs.io/en/stable/deferrable_functions_reference.html#reframe.utility.sanity.assert_found\n        return sn.assert_found(r'Sum of all ranks: %s' % sum_of_ranks, self.stdout)\n\n    # Now, we define a pattern to extract a number that reflects the performance of this test\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.builtins.performance_function\n    @performance_function('s')\n    def time(self):\n        # 
https://reframe-hpc.readthedocs.io/en/stable/deferrable_functions_reference.html#reframe.utility.sanity.extractsingle\n        return sn.extractsingle(r'^Time elapsed:\\s+(?P<perf>\\S+)', self.stdout, 'perf', float)\n

This single test class will generate 6 test instances: tests with 2, 128, and 256 tasks for each of the two modules. ReFrame validates that the test ran correctly by checking the sum of ranks reported at the end of the output. Finally, it will also print the performance number that was extracted by the performance_function.
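The sanity check relies on the identity that the ranks 0, ..., N-1 sum to N * (N - 1) / 2; a quick plain-Python verification for the scales used here:

```python
def sum_of_ranks(n: int) -> int:
    """Expected result of the MPI_Reduce sum over ranks 0..n-1."""
    return round(n * (n - 1) / 2)

# Check the closed form against an explicit sum for each scale in the test.
for n in (2, 128, 256):
    assert sum_of_ranks(n) == sum(range(n))
print(sum_of_ranks(128))  # 8128
```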

This test works, but is not very portable. If we move to a system with 192 cores per node, the current scale parameter is a bit awkward. The test would still run, but we wouldn't have a test instance that just tests this on a full (single) node or two full nodes. Furthermore, if we add a new mpi4py module in EESSI, we would have to alter the test to add the name to the list, since the module names are hardcoded in this test.

"},{"location":"test-suite/writing-portable-tests/#as-portable-reframe-test","title":"Step 3: implementing as a portable ReFrame test","text":"

In step 2, there were several system-specific items in the test. In this section, we will show how we use inheritance from the EESSI_Mixin class to avoid hard-coding system specific information. The full final test can be found under tutorials/mpi4py/mpi4py_portable_mixin.py in the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#how-eessi_mixin-works","title":"How EESSI_Mixin works","text":"

The EESSI_Mixin class provides standardized functionality that should be useful to all tests in the EESSI test suite. One of its key functions is to make sure tests dynamically determine sensible values for the things that were system-specific in Step 2. For example, instead of hard-coding a task count, a test inheriting from EESSI_Mixin determines it dynamically, based on the number of available cores per node and a declaration by the inheriting test class of how it wants tasks to be instantiated.

To illustrate this, suppose you want to launch your test with one task per CPU core. In that case, your test (that inherits from EESSI_Mixin) only has to declare

compute_unit = COMPUTE_UNIT[CPU]\n

The EESSI_Mixin class then takes care of querying the ReFrame config file for the cpu topology of the node, and setting the correct number of tasks per node.

Another feature is that it sets defaults for a few items, such as the valid_prog_environs = ['default']. These will likely be the same for most tests in the EESSI test suite, and when they do need to be different, one can easily overwrite them in the child class.

Most of the functionality in the EESSI_Mixin class requires certain class attributes (such as the compute_unit above) to be set by the child class, so that the EESSI_Mixin class can use those as input. It is important that these attributes are set before the stage in which the EESSI_Mixin class needs them (see the stages of the ReFrame regression pipeline). To support test developers, the EESSI_Mixin class checks whether these attributes are set, and gives verbose feedback in case any attributes are missing.

"},{"location":"test-suite/writing-portable-tests/#inheriting-from-eessi_mixin","title":"Inheriting from EESSI_Mixin","text":"

The first step is to actually inherit from the EESSI_Mixin class:

from eessi.testsuite.eessi_mixin import EESSI_Mixin\n...\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n
"},{"location":"test-suite/writing-portable-tests/#removing-hard-coded-test-scales","title":"Removing hard-coded test scales","text":"

First, we remove

    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n
from the test. The EESSI_Mixin class will define the default set of scales on which this test will be run as
from eessi.testsuite.constants import SCALES\n...\n    scale = parameter(SCALES.keys())\n

This ensures the test will run a test case for each of the default scales, as defined by the SCALES constant.

If, and only if, your test cannot run on all of those scales should you overwrite this parameter in your child class. For example, if you have a test that does not support running on multiple nodes, you could define a filtering function outside of the class

def filter_scales():\n    return [\n        k for (k,v) in SCALES.items()\n        if v['num_nodes'] == 1\n    ]\n
and then in the class body overwrite the scale parameter with a subset of items from the SCALES constant:
    scale = parameter(filter_scales())\n

Next, we also remove

   @run_after('init')\n    def define_task_count(self):\n        self.num_tasks = self.scale\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n

as num_tasks and num_tasks_per_node will be set by the assign_tasks_per_compute_unit hook, which is invoked by the EESSI_Mixin class.

Instead, we only set the compute_unit. The number of launched tasks will be equal to the number of compute units. E.g.

    compute_unit = COMPUTE_UNIT[CPU]\n
will launch one task per (physical) CPU core. Other options are COMPUTE_UNIT[HWTHREAD] (one task per hardware thread), COMPUTE_UNIT[NUMA_NODE] (one task per NUMA node), COMPUTE_UNIT[CPU_SOCKET] (one task per CPU socket), COMPUTE_UNIT[GPU] (one task per GPU) and COMPUTE_UNIT[NODE] (one task per node). Check the COMPUTE_UNIT constant for the full list of valid compute units. The number of cores per task will automatically be set based on this, as the ratio of the number of cores in a node to the number of tasks per node (rounded down). Additionally, the EESSI_Mixin class will set the OMP_NUM_THREADS environment variable equal to the number of cores per task.
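The cores-per-task arithmetic described above can be sketched in plain Python (assumed logic for illustration, not the actual EESSI_Mixin implementation):

```python
def cpus_per_task(cores_per_node: int, tasks_per_node: int) -> int:
    # ratio of cores per node to tasks per node, rounded down
    return cores_per_node // tasks_per_node

# E.g. one task per CPU socket on a 2-socket, 128-core node gives 2 tasks
# of 64 cores each; OMP_NUM_THREADS would then be set to 64 for each task.
print(cpus_per_task(128, 2))  # 64
```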

Note

compute_unit needs to be set before (or in) ReFrame's setup phase. For the different phases of the pipeline, please see the documentation on how ReFrame executes tests.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-module-names","title":"Replacing hard-coded module names","text":"

Instead of hard-coding a module name, we parameterize over all module names that match a certain regular expression.

from eessi.testsuite.utils import find_modules\n...\n    module_name = parameter(find_modules('mpi4py'))\n

This parameter generates all module names available on the current system matching the expression, and each test instance will load the respective module before running the test.

Furthermore, we remove the hook that sets self.modules:

@run_after('init')\ndef set_modules(self):\n    self.modules = [self.module_name]\n
This is now taken care of by the EESSI_Mixin class.

Note

module_name needs to be set before (or in) ReFrame's init phase.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-system-names-and-programming-environments","title":"Replacing hard-coded system names and programming environments","text":"

First, we remove the hard-coded system name and programming environment. I.e. we remove

    valid_prog_environs = ['default']\n    valid_systems = ['snellius']\n
The EESSI_Mixin class sets valid_prog_environs = ['default'] by default, so that is no longer needed in the child class (but it can be overwritten if needed). The valid_systems is instead replaced by a declaration of what type of device type is needed. We'll create an mpi4py test that runs on CPUs only:
    device_type = DEVICE_TYPES[CPU]\n
but note if we would have wanted to also generate test instances to test GPU <=> GPU communication, we could have defined this as a parameter:
    device_type = parameter([DEVICE_TYPES[CPU], DEVICE_TYPES[GPU]])\n

The device type that is set will be used by the filter_valid_systems_by_device_type hook to check in the ReFrame configuration file which of the current partitions contain the relevant device. Typically, we don't set DEVICE_TYPES[CPU] on a GPU partition in the ReFrame configuration, so that all CPU-only tests are skipped on GPU nodes. Check the DEVICE_TYPES constant for the full list of valid device types.

EESSI_Mixin also filters based on the supported scales, which can again be configured per partition in the ReFrame configuration file. This can e.g. be used to avoid running large-scale tests on partitions that don't have enough nodes to run them.

Note

device_type needs to be set before (or in) ReFrame's init phase.

"},{"location":"test-suite/writing-portable-tests/#requesting-sufficient-ram-memory","title":"Requesting sufficient RAM memory","text":"

To make sure you get an allocation with sufficient memory, your test should declare how much memory per node it needs by defining a required_mem_per_node function in your test class that returns the required memory per node (in MiB). Note that the amount of required memory generally depends on the amount of tasks that are launched per node (self.num_tasks_per_node).

Our mpi4py test takes around 200 MB when running with a single task, plus about 70 MB for every additional task. We round this up a little so that we can be sure the test won't run out of memory if memory consumption is slightly different on a different system. Thus, we define:

def required_mem_per_node(self):\n    return self.num_tasks_per_node * 100 + 250\n

While rounding up is advisable, do keep your estimate realistic. Too high a memory request will mean the test will get skipped on systems that cannot satisfy that memory request. Most HPC systems have at least 1 GB per core, and most laptop/desktops have at least 8 GB total. Designing a test so that it fits within those memory constraints will ensure it can be run almost anywhere.
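Checking the memory request from above against those guidelines, in plain Python mirroring the required_mem_per_node method:

```python
def required_mem_per_node(num_tasks_per_node: int) -> int:
    """Required memory per node in MiB, as in the mpi4py test above."""
    return num_tasks_per_node * 100 + 250

print(required_mem_per_node(1))    # 350 MiB: fits easily on a laptop with 8 GB
print(required_mem_per_node(128))  # 13050 MiB: ~102 MiB/core, well under 1 GB/core
```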

Note

The easiest way to get the memory consumption of your test at various task counts is to execute it on a system which runs jobs in cgroups, define measure_memory_usage = True in your class body, and make the required_mem_per_node function return a constant amount of memory equal to the available memory per node on your test system. This will cause the EESSI_Mixin class to read out the maximum memory usage of the cgroup (on the head node of your allocation, in case of multi-node tests) and report it as a performance number.

"},{"location":"test-suite/writing-portable-tests/#process-binding","title":"Process binding","text":"

The EESSI_Mixin class binds each process to its corresponding set of cores automatically, using the hooks.set_compact_process_binding hook. E.g. for a pure MPI test like mpi4py, each task will be bound to a single core. For hybrid tests that do both multiprocessing and multithreading, each task is bound to a sequential range of cores. E.g. on a node with 128 cores, for a hybrid test with 64 tasks and 2 threads per task, the first task will be bound to cores 0 and 1, the second task to cores 2 and 3, etc. To override this behaviour, one would have to overwrite the

@run_after('setup')\ndef assign_tasks_per_compute_unit(self):\n    ...\n
function. Note that this function also calls other hooks (such as hooks.assign_tasks_per_compute_unit) that you probably still want to invoke. Check the EESSI_Mixin class definition to see which hooks you still want to call.
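The compact binding pattern described above can be sketched in plain Python (assumed behaviour for illustration, not the actual hooks.set_compact_process_binding implementation):

```python
def compact_binding(num_tasks: int, cores_per_task: int) -> list:
    """Task i is bound to the consecutive cores i*t .. (i+1)*t - 1."""
    return [list(range(i * cores_per_task, (i + 1) * cores_per_task))
            for i in range(num_tasks)]

# 64 tasks with 2 threads each on a 128-core node:
binding = compact_binding(64, 2)
print(binding[0], binding[1])  # [0, 1] [2, 3]
```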

"},{"location":"test-suite/writing-portable-tests/#ci-tag","title":"CI Tag","text":"

As mentioned in the Test requirements, there should be at least one light-weight (short, low-core, low-memory) test case, which should be marked with the CI tag. The EESSI_Mixin class will automatically add the CI tag if both bench_name (the current variant) and bench_name_ci (the CI variant) are defined. The mpi4py test contains only one test case (which is very light-weight). In this case, it is sufficient to set both to the same name in the class body:

bench_name = 'mpi4py'\nbench_name_ci = 'mpi4py'\n

Suppose that our test has 2 variants, of which only 'variant1' should be marked CI. In that case, we can define bench_name as a parameter:

    bench_name = parameter(['variant1', 'variant2'])\n    bench_name_ci = 'variant1'\n
Next, we can define a hook that does different things depending on the variant, for example:
@run_after('init')\ndef do_something(self):\n    if self.bench_name == 'variant1':\n        do_this()\n    elif self.bench_name == 'variant2':\n        do_that()\n

"},{"location":"test-suite/writing-portable-tests/#thread-binding-optional","title":"Thread binding (optional)","text":"

Thread binding is not done by default, but can be done by invoking the hooks.set_compact_thread_binding hook:

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_thread_binding(self)\n

"},{"location":"test-suite/writing-portable-tests/#skipping-test-instances","title":"Skipping test instances when required (optional)","text":"

Preferably, we prevent test instances from being generated (i.e. before ReFrame's setup phase) if we know that they cannot run on a certain system. However, sometimes we need information on the nodes that will run the test, which is only available after the setup phase. That is the case whenever we need information from e.g. reframe.core.pipeline.RegressionTest.current_partition.

For example, we might know that a test only scales to around 300 tasks, and that above that, execution time increases rapidly. In that case, we'd want to skip any test instance that results in a larger number of tasks, but we only know this after assign_tasks_per_compute_unit has been called (which is done by EESSI_Mixin after the setup stage). For example, the 2_nodes scale would run fine on systems with 128 cores per node, but would exceed the task limit of 300 on systems with 192 cores per node.

We can skip any generated test cases using the skip_if function. For example, to skip the test if the total task count exceeds 300, we'd need to call skip_if after the setup stage (so that self.num_tasks is already set):

@run_after('setup')\ndef skip_test_when_too_many_tasks(self):\n    hooks.assign_tasks_per_compute_unit(test=self, compute_unit=COMPUTE_UNIT[CPU])\n\n    max_tasks = 300\n    self.skip_if(self.num_tasks > max_tasks,\n                 f'Skipping test: more than {max_tasks} tasks are requested ({self.num_tasks})')\n

The mpi4py test scales up to a very high core count, but if we were to impose such a limit for the sake of this example, one would see:

[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=16_nodes /38aea144 @snellius:genoa+default\n[     SKIP ] ( 1/13) Skipping test: more than 300 tasks are requested (3072)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=8_nodes /bfc4d3d4 @snellius:genoa+default\n[     SKIP ] ( 2/13) Skipping test: more than 300 tasks are requested (1536)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_nodes /8de369bc @snellius:genoa+default\n[     SKIP ] ( 3/13) Skipping test: more than 300 tasks are requested (768)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_nodes /364146ba @snellius:genoa+default\n[     SKIP ] ( 4/13) Skipping test: more than 300 tasks are requested (384)\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_node /8225edb3 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_2_node /4acf483a @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_4_node /fc3d689b @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_8_node /73046a73 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_4nodes /f08712a2 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_2nodes /23cd550b @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_cores /bb8e1349 @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_cores /4c0c7c9e @snellius:genoa+default\n[ RUN      ] EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_core /aa83ba9e @snellius:genoa+default\n\n...\n
on a system with 192 cores per node. I.e. any test of 2 nodes (384 cores) or above would be skipped because it exceeds our max task count.

"},{"location":"test-suite/writing-portable-tests/#setting-a-time-limit-optional","title":"Setting a time limit (optional)","text":"

By default, the EESSI_Mixin class sets a time limit for jobs of 1 hour. You can overwrite this in your child class:

time_limit = '5m00s'\n
For the appropriate string formatting, please check the ReFrame documentation on time_limit. We already had this in the non-portable version of our mpi4py test and will keep it in the portable version: since this is a very quick test, specifying a lower time limit will help in getting the jobs scheduled more quickly.

Note that for the test to be portable, the time limit should be set such that it is sufficient regardless of node architecture and scale. It is pretty hard to guarantee this with a single, fixed time limit, without knowing upfront what architecture the test will be run on, and thus how many tasks will be launched. For strong scaling tests, you might want a higher time limit for low task counts, whereas for weak scaling tests you might want a higher time limit for higher task counts. To do so, you can consider setting the time limit after setup, and making it dependent on the task count.

Suppose we have a weak scaling test that takes 5 minutes with a single task, and 60 minutes with 10k tasks. We can set a time limit based on linear interpolation between those task counts:

@run_after('setup')\ndef set_time_limit(self):\n    # linearly interpolate between the single and 10k task count\n    minutes = 5 + self.num_tasks * ((60-5) / 10000)\n    self.time_limit = f'{minutes}m00s'\n
Note that this is typically an overestimate of how long the test will take for intermediate task counts, but that's ok: we'd rather overestimate than underestimate the runtime.
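Evaluated in plain Python (mirroring the interpolation in the hook above), the formula gives:

```python
def time_limit_minutes(num_tasks: int) -> float:
    # linearly interpolate between 5 min at 1 task and 60 min at 10k tasks
    return 5 + num_tasks * ((60 - 5) / 10000)

print(time_limit_minutes(1))      # ~5.0 minutes
print(time_limit_minutes(384))    # ~7.1 minutes (e.g. 2 nodes of 192 cores)
print(time_limit_minutes(10000))  # 60.0 minutes
```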

To be even safer, one could consider combining this with logic to skip tests if the 10k task count is exceeded.

"},{"location":"test-suite/writing-portable-tests/#summary","title":"Summary","text":"

To make the test portable, we added additional imports:

from eessi.testsuite.eessi_mixin import EESSI_Mixin\nfrom eessi.testsuite.constants import COMPUTE_UNIT, DEVICE_TYPES, CPU\nfrom eessi.testsuite.utils import find_modules\n

Made sure the test inherits from EESSI_Mixin:

@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n

Removed the following from the class body:

valid_prog_environs = ['default']\nvalid_systems = ['snellius']\n\nmodule_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\nscale = parameter([2, 128, 256])\n

Added the following to the class body:

device_type = DEVICE_TYPES[CPU]\ncompute_unit = COMPUTE_UNIT[CPU]\n\nmodule_name = parameter(find_modules('mpi4py'))\n

Defined the class method:

def required_mem_per_node(self):\n    return self.num_tasks_per_node * 100 + 250\n

Removed the ReFrame pipeline hook that sets self.modules:

@run_after('init')\ndef set_modules(self):\n     self.modules = [self.module_name]\n

Removed the ReFrame pipeline hook that sets the number of tasks and number of tasks per node:

@run_after('init')\ndef define_task_count(self):\n    # Set the number of tasks, self.scale is now a single number out of the parameter list\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks\n    self.num_tasks = self.scale\n    # Set the number of tasks per node to either be equal to the number of tasks, but at most 128,\n    # since we have 128-core nodes\n    # https://reframe-hpc.readthedocs.io/en/stable/regression_test_api.html#reframe.core.pipeline.RegressionTest.num_tasks_per_node\n    self.num_tasks_per_node = min(self.num_tasks, 128)\n

The final test is thus:

\"\"\"\nThis module tests mpi4py's MPI_Reduce call\n\"\"\"\n\nimport reframe as rfm\nimport reframe.utility.sanity as sn\n\nfrom reframe.core.builtins import variable, parameter, run_after, performance_function, sanity_function\n\nfrom eessi.testsuite.eessi_mixin import EESSI_Mixin\nfrom eessi.testsuite.constants import COMPUTE_UNIT, DEVICE_TYPES, CPU\nfrom eessi.testsuite.utils import find_modules\n\n@rfm.simple_test\nclass EESSI_MPI4PY(rfm.RunOnlyRegressionTest, EESSI_Mixin):\n    device_type = DEVICE_TYPES[CPU]\n    compute_unit = COMPUTE_UNIT[CPU]\n\n    module_name = parameter(find_modules('mpi4py'))\n\n    n_iterations = variable(int, value=1000)\n    n_warmup = variable(int, value=100)\n\n    executable = 'python3'\n    executable_opts = ['mpi4py_reduce.py', '--n_iter', f'{n_iterations}', '--n_warmup', f'{n_warmup}']\n\n    time_limit = '5m00s'\n\n    def required_mem_per_node(self):\n        return self.num_tasks_per_node * 100 + 250\n\n    @sanity_function\n    def validate(self):\n        sum_of_ranks = round(self.num_tasks * ((self.num_tasks - 1) / 2))\n        return sn.assert_found(r'Sum of all ranks: %s' % sum_of_ranks, self.stdout)\n\n    @performance_function('s')\n    def time(self):\n        return sn.extractsingle(r'^Time elapsed:\\s+(?P<perf>\\S+)', self.stdout, 'perf', float)\n

Note that at only 34 lines of code, this test is now very quick and easy to write, thanks to the default behaviour provided by the EESSI_Mixin class.

"},{"location":"test-suite/writing-portable-tests/#background-of-mpi4py-test","title":"Background of the mpi4py test","text":"

To understand what this test does, you need to know some basics of MPI. If you know about MPI, you can skip this section.

The MPI standard defines how to communicate between multiple processes that work on a common computational task. Each process that is part of the computational task gets a unique identifier (0 to N-1 for N processes), the MPI rank, which can e.g. be used to distribute a workload. The MPI standard defines communication between two given processes (so-called point-to-point communication), but also between a set of N processes (so-called collective communication).

An example of such a collective operation is the MPI_REDUCE call. It reduces data elements from multiple processes with a certain operation, e.g. it takes the sum of all elements or multiplication of all elements.

"},{"location":"test-suite/writing-portable-tests/#the-mpi4py-test","title":"The mpi4py test","text":"

In this example, we will implement a test that does an MPI_Reduce on the rank, using the MPI.SUM operation. This makes it easy to validate the result, as we know that for N processes, the theoretical sum of all ranks (0, 1, ... N-1) is (N * (N-1) / 2).
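The closed-form expression is easy to verify directly (a standalone check, not part of the test itself):

```python
def expected_sum_of_ranks(num_tasks):
    """Theoretical result of summing MPI ranks 0..num_tasks-1."""
    return num_tasks * (num_tasks - 1) // 2

# matches the brute-force sum for any process count
for n in (1, 2, 4, 128):
    assert expected_sum_of_ranks(n) == sum(range(n))
```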

Our initial code is a python script mpi4py_reduce.py, which can be found in tutorials/mpi4py/src/mpi4py_reduce.py in the EESSI test suite repository:

#!/usr/bin/env python\n\"\"\"\nMPI_Reduce on MPI rank. This should result in a total of (size * (size - 1) / 2),\nwhere size is the total number of ranks.\nPrints the total number of ranks, the sum of all ranks, and the time elapsed for the reduction.\"\n\"\"\"\n\nimport argparse\nimport time\n\nfrom mpi4py import MPI\n\nparser = argparse.ArgumentParser(description='mpi4py reduction benchmark',\n                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)\nparser.add_argument('--n_warmup', type=int, default=100,\n                    help='Number of warmup iterations')\nparser.add_argument('--n_iter', type=int, default=1000,\n                    help='Number of benchmark iterations')\nargs = parser.parse_args()\n\nn_warmup = args.n_warmup\nn_iter = args.n_iter\n\nsize = MPI.COMM_WORLD.Get_size()\nrank = MPI.COMM_WORLD.Get_rank()\nname = MPI.Get_processor_name()\n\n# Warmup\nt0 = time.time()\nfor i in range(n_warmup):\n    total = MPI.COMM_WORLD.reduce(rank, op=MPI.SUM)\n\n# Actual reduction, multiple iterations for accuracy of timing\nt1 = time.time()\nfor i in range(n_iter):\n    total = MPI.COMM_WORLD.reduce(rank, op=MPI.SUM)\nt2 = time.time()\ntotal_time = (t2 - t1) / n_iter\n\nif rank == 0:\n    print(f\"Total ranks: {size}\")\n    print(f\"Sum of all ranks: {total}\")  # Should be (size * (size-1) / 2)\n    print(f\"Time elapsed: {total_time:.3e}\")\n

Assuming we have mpi4py available, we could run this manually using

$ mpirun -np 4 python3 mpi4py_reduce.py\nTotal ranks: 4\nSum of all ranks: 6\nTime elapsed: 3.609e-06\n

This started 4 processes, with ranks 0, 1, 2, 3, and then summed all the ranks (0+1+2+3=6) on the process with rank 0, which finally printed all this output. The whole reduction operation is performed n_iter times, so that we get a more reproducible timing.

"},{"location":"test-suite/writing-portable-tests/#as-portable-reframe-test-legacy","title":"Step 3: implementing as a portable ReFrame test without using EESSI_Mixin","text":"

The approach using inheritance from the EESSI_Mixin class, described above, is strongly preferred and recommended. There might be certain tests that do not fit the standardized approach of EESSI_Mixin, but usually that will be solvable by overwriting hooks set by EESSI_Mixin in the inheriting class. In the rare case that your test is so exotic that even this doesn't provide a sensible solution, you can still invoke the hooks used by EESSI_Mixin manually. Note that this used to be the default way of writing tests for the EESSI test suite.

In step 2, there were several system-specific items in the test. In this section, we will show how we use the EESSI hooks to avoid hard-coding system specific information. We do this by replacing the system-specific parts of the test from Step 2 bit by bit. The full final test can be found under tutorials/mpi4py/mpi4py_portable_legacy.py in the EESSI test suite repository.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-test-scales-mandatory","title":"Replacing hard-coded test scales (mandatory)","text":"

We replace the hard-coded

    # ReFrame will generate a test for each scale\n    scale = parameter([2, 128, 256])\n

by

from eessi.testsuite.constants import SCALES\n...\n    # ReFrame will generate a test for each scale\n    scale = parameter(SCALES.keys())\n

The SCALES constant contains the set of default scales at which we run all tests. For our mpi4py example, these defaults are sufficient.

Note

It might be that particular tests do not make sense at certain scales. An example is code that only has multithreading, but no multiprocessing support, and is thus only able to run on a single node. In that case, we filter the set of SCALES down to only those where num_nodes = 1, and parameterize the test across those scales:

from eessi.testsuite.constants import SCALES\ndef get_singlenode_scales():\n    \"\"\"\n    Filtering function for single node tests\n    \"\"\"\n    return [\n        k for (k, v) in SCALES.items()\n        if v['num_nodes'] == 1\n    ]\n   ...\n   scale = parameter(get_singlenode_scales())\n

We also replace

    @run_after('init')\n    def define_task_count(self):\n        self.num_tasks = self.scale\n        self.num_tasks_per_node = min(self.num_tasks, 128)\n

by

from eessi.testsuite import hooks\nfrom eessi.testsuite.constants import SCALES, COMPUTE_UNIT, CPU\n    ...\n    @run_after('init')\n    def run_after_init(self):\n        hooks.set_tag_scale(self)\n\n    @run_after('setup')\n    def set_num_tasks_per_node(self):\n        \"\"\" Setting number of tasks per node and cpus per task in this function. This function sets num_cpus_per_task\n        for 1 node and 2 node options where the request is for full nodes.\"\"\"\n        hooks.assign_tasks_per_compute_unit(self, COMPUTE_UNIT[CPU])\n

The first hook (set_tag_scale) sets a number of custom attributes for the current test, based on the scale (self.num_nodes, self.default_num_cpus_per_node, self.default_num_gpus_per_node, self.node_part). These are not used by ReFrame, but can be used by later hooks from the EESSI test suite. It also sets a ReFrame scale tag for convenience. These scale tags are useful for quick test selection, e.g. by running ReFrame with --tag 1_node one would only run the tests generated for the scale 1_node. Calling this hook is mandatory for all tests, as it ensures standardization of tag names based on the scales.

The second hook, assign_tasks_per_compute_unit, is used to set the task count. This hook sets the self.num_tasks and self.num_tasks_per_node we hardcoded before. In addition, it sets self.num_cpus_per_task. In this case, we call it with the COMPUTE_UNIT[CPU] argument, which means one task will be launched per (physical) CPU available. Thus, for the 1_node scale, this would run the mpi4py test with 128 tasks on a 128-core node, and with 192 tasks on a 192-core node. Check the code for other valid COMPUTE_UNIT values.

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-module-names-mandatory","title":"Replacing hard-coded module names (mandatory)","text":"

If we write an mpi4py test, we typically want to run this for all mpi4py modules that are available via our current $MODULEPATH. We do that by replacing

    module_name = parameter(['mpi4py/3.1.4-gompi-2023a', 'mpi4py/3.1.5-gompi-2023b'])\n

by using the find_modules utility function:

from eessi.testsuite.utils import find_modules\n...\n    module_name = parameter(find_modules('mpi4py'))\n

We also replace

    @run_after('init')\n    def set_modules(self):\n        self.modules = [self.module_name]\n

by

    @run_after('init')\n    def set_modules(self):\n        hooks.set_modules(self)\n

The set_modules hook assumes that self.module_name has been set, but has the added advantage that a user running the EESSI test suite can overwrite the modules to load from the command line when running ReFrame (see Overriding test parameters).

"},{"location":"test-suite/writing-portable-tests/#replacing-hard-coded-valid_systems-mandatory","title":"Replacing hard-coded valid_systems (mandatory)","text":"

The valid_systems attribute is a mandatory attribute to specify in a ReFrame test. However, we can set it to match any system:

valid_systems = ['*']\n

Normally, valid_systems is used as a way of guaranteeing that a system has the necessary properties to run the test. For example, if we know that my_gpu_system has NVIDIA GPUs and I have a test written for NVIDIA GPUs, I would specify valid_systems = ['my_gpu_system'] for that test. This, however, is a surrogate for declaring what my test needs: I'm saying it needs my_gpu_system, while in fact I could make the more general statement 'this test needs NVIDIA GPUs'.

To keep the test system-agnostic we can declare what the test needs by using ReFrame's concept of partition features (a string) and/or extras (a key-value pair); see the ReFrame documentation on valid_systems. For example, a test could declare it needs the gpu feature. Such a test will only be created by ReFrame for partitions that declare (in the ReFrame configuration file) that they have the gpu feature.

Since features and extras are free-text fields, we standardize them in the EESSI test suite in the eessi/testsuite/constants.py file. For example, tests that require an NVIDIA GPU could specify

from eessi.testsuite.constants import FEATURES, GPU, GPU_VENDOR, GPU_VENDORS, NVIDIA\n...\nvalid_systems = f'+{FEATURES[GPU]} %{GPU_VENDOR}={GPU_VENDORS[NVIDIA]}'\n

which makes sure that a test instance is only generated for partitions (as defined in the ReFrame configuration file) that specify that they have the corresponding feature and extras:

from eessi.testsuite.constants import FEATURES, GPU, GPU_VENDOR, GPU_VENDORS, NVIDIA\n...\n'features': [\n     FEATURES[GPU],\n],\n'extras': {\n    GPU_VENDOR: GPU_VENDORS[NVIDIA],\n},\n

In practice, one will rarely hard-code this valid_systems string. Instead, we have a hook filter_valid_systems_by_device_type. It does the above, and a bit more: it also checks if the module that the test is generated for is CUDA-enabled (in case of a test for NVIDIA GPUs), and only then will it generate a GPU-based test. Calling this hook is mandatory for all tests (even if just to declare they need a CPU to run).

Another aspect is that not all ReFrame partitions may be able to run tests of all of the standard SCALES. Each ReFrame partition must add the subset of SCALES it supports to its list of features. A test case can declare it needs a certain scale. For example, a test case using the 16_nodes scale needs a partition with at least 16 nodes. The filter_supported_scales hook then filters out all partitions that do not support running jobs on 16 nodes. Calling this hook is also mandatory for all tests.

There may be other hooks that facilitate valid system selection for your tests, but please check the code for a full list.

"},{"location":"test-suite/writing-portable-tests/#requesting-sufficient-memory-mandatory","title":"Requesting sufficient memory (mandatory)","text":"

When developing the test, we don't know how much memory the nodes it will run on will have. However, we do know how much memory our application needs.

We can declare this need using the req_memory_per_node hook. This hook is mandatory for all tests. If you are on a system with a scheduler that runs jobs within a cgroup, and you can use mpirun or srun as the parallel launcher command in the ReFrame configuration, determining the memory consumption is easy. You can (temporarily) add the following postrun_cmds to the class body of your test, which extract the maximum memory that was used within your cgroup. For cgroups v1, the syntax would be:

   # Temporarily define postrun_cmds to make it easy to find out memory usage\n    postrun_cmds = ['MAX_MEM_IN_BYTES=$(</sys/fs/cgroup/memory/$(</proc/self/cpuset)/../memory.max_usage_in_bytes)', 'echo \"MAX_MEM_IN_MIB=$(($MAX_MEM_IN_BYTES/1048576))\"']\n

For cgroups v2, the syntax would be:

   # Temporarily define postrun_cmds to make it easy to find out memory usage\n   postrun_cmds = ['MAX_MEM_IN_BYTES=$(</sys/fs/cgroup/$(</proc/self/cpuset)/../../../memory.peak)', 'echo \"MAX_MEM_IN_MIB=$(($MAX_MEM_IN_BYTES/1048576))\"']\n

And define an additional performance_function:

    @performance_function('MiB')\n    def max_mem_in_mib(self):\n        return sn.extractsingle(r'^MAX_MEM_IN_MIB=(?P<perf>\\S+)', self.stdout, 'perf', int)\n

This results in the following output on 192-core nodes (we've omitted some output for readability):

[----------] start processing checks\n[       OK ] ( 1/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=16_nodes /38aea144 @snellius:genoa+default\nP: max_mem_in_mib: 22018 MiB (r:0, l:None, u:None)\n[       OK ] ( 2/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=8_nodes /bfc4d3d4 @snellius:genoa+default\nP: max_mem_in_mib: 21845 MiB (r:0, l:None, u:None)\n[       OK ] ( 3/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_nodes /8de369bc @snellius:genoa+default\nP: max_mem_in_mib: 21873 MiB (r:0, l:None, u:None)\n[       OK ] ( 4/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=2_nodes /364146ba @snellius:genoa+default\nP: max_mem_in_mib: 21800 MiB (r:0, l:None, u:None)\n[       OK ] ( 5/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_node /8225edb3 @snellius:genoa+default\nP: max_mem_in_mib: 21666 MiB (r:0, l:None, u:None)\n[       OK ] ( 6/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_2_node /4acf483a @snellius:genoa+default\nP: max_mem_in_mib: 10768 MiB (r:0, l:None, u:None)\n[       OK ] ( 7/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_4_node /fc3d689b @snellius:genoa+default\nP: max_mem_in_mib: 5363 MiB (r:0, l:None, u:None)\n[       OK ] ( 8/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_8_node /73046a73 @snellius:genoa+default\nP: max_mem_in_mib: 2674 MiB (r:0, l:None, u:None)\n[       OK ] ( 9/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_4nodes /f08712a2 @snellius:genoa+default\nP: max_mem_in_mib: 210 MiB (r:0, l:None, u:None)\n[       OK ] (10/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1cpn_2nodes /23cd550b @snellius:genoa+default\nP: max_mem_in_mib: 209 MiB (r:0, l:None, u:None)\n[       OK ] (11/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=4_cores /bb8e1349 @snellius:genoa+default\nP: max_mem_in_mib: 753 MiB (r:0, l:None, u:None)\n[       OK ] (12/13) EESSI_MPI4PY 
%module_name=mpi4py/3.1.5-gompi-2023b %scale=2_cores /4c0c7c9e @snellius:genoa+default\nP: max_mem_in_mib: 403 MiB (r:0, l:None, u:None)\n[       OK ] (13/13) EESSI_MPI4PY %module_name=mpi4py/3.1.5-gompi-2023b %scale=1_core /aa83ba9e @snellius:genoa+default\nP: max_mem_in_mib: 195 MiB (r:0, l:None, u:None)\n

If you are not on a system where your scheduler runs jobs in cgroups, you will have to figure out the memory consumption in another way (e.g. by checking memory usage in top while running the test).

We now have a pretty good idea how the memory per node scales: for our smallest process count (1 core), it's about 200 MiB per process, while for our largest process count (16 nodes, 16*192 processes), it's 22018 MiB per node (or about 115 MiB per process). If we wanted to do really well, we could define a linear function (with offset) and fit it through the data (and round up to be on the safe side, i.e. make sure there is enough memory). Then, we could call the hook like this:

@run_after('setup')\ndef request_mem(self):\n    mem_required = self.num_tasks_per_node * mem_slope + mem_intercept\n    hooks.req_memory_per_node(self, app_mem_req=mem_required)\n
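The mem_slope and mem_intercept in that snippet are left undefined; they could be obtained, for example, by fitting through the two extreme measurements shown above. A standalone sketch (the 25% safety margin is an illustrative choice):

```python
import math

# (tasks per node, measured MiB per node), from the 1_core and 16_nodes runs
points = [(1, 195), (192, 22018)]
mem_slope = (points[1][1] - points[0][1]) / (points[1][0] - points[0][0])
mem_intercept = points[0][1] - mem_slope * points[0][0]

def mem_required_mib(num_tasks_per_node, safety=1.25):
    """Linear memory model per node, rounded up with a 25% safety margin."""
    return math.ceil(safety * (mem_slope * num_tasks_per_node + mem_intercept))
```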

In this case, however, the memory consumption per process is low enough that we don't have to go through that effort, and can generously request 256 MiB per task launched on a node. Thus, we call our hook using:

@run_after('setup')\ndef request_mem(self):\n    mem_required = self.num_tasks_per_node * 256\n    hooks.req_memory_per_node(self, app_mem_req=mem_required)\n
Note that requesting too much memory means the test will be skipped on nodes that cannot meet that requirement (even if they might have been able to run it without actually running out of memory), while requesting too little risks nodes running out of memory while running the test. Note that many HPC systems have around 1-2 GB of memory per core. It's good to ensure (if you can) that the memory requests for all valid SCALES for your test do not exceed the total amount of memory available on typical nodes.

"},{"location":"test-suite/writing-portable-tests/#requesting-taskprocessthread-binding-recommended","title":"Requesting task/process/thread binding (recommended)","text":"

Binding processes to a set of cores prevents the OS from migrating them to other cores. Process migration can cause performance hits, especially on multi-socket systems where a process may be moved to a CPU core on the other socket. Since migration is controlled by the OS, and depends on what other processes are running on the node, it can cause unpredictable performance: in some runs, processes might be migrated, while in others, they aren't.

Thus, it is typically better for reproducibility to bind processes to their respective set of cores. The set_compact_process_binding hook can do this for you:

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_process_binding(self)\n

For pure MPI codes, it will bind rank 0 to core 0, rank 1 to core 1, etc. For hybrid codes (MPI + OpenMP, or otherwise codes that do both multiprocessing and multithreading at the same time), it will bind each rank to a consecutive set of cores. E.g. if a single process uses 4 cores, it will bind rank 0 to cores 0-3, rank 1 to cores 4-7, etc.
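As a standalone illustration of that mapping (not code from the test suite):

```python
def compact_binding(rank, cores_per_process=1):
    """Cores that a rank gets pinned to under compact process binding."""
    first = rank * cores_per_process
    return list(range(first, first + cores_per_process))
```

For example, compact_binding(1, cores_per_process=4) gives cores 4-7 for rank 1, matching the hybrid case described above.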

To impose this binding, the hook sets environment variables that should be respected by the parallel launcher used to launch your application. Check the code to see which parallel launchers are currently supported. The use of this hook is optional, but generally recommended for all multiprocessing codes.

For multithreading codes, the set_compact_thread_binding hook is the equivalent: it can do thread binding, provided a supported multithreading framework is used (e.g. Intel or GNU OpenMP; see the code for all supported frameworks):

@run_after('setup')\ndef set_binding(self):\n    hooks.set_compact_thread_binding(self)\n

The use of this hook is optional but recommended in most cases. Note that thread binding can sometimes cause unwanted behaviour: even if e.g. 8 cores are allocated to the process and 8 threads are launched, we have seen codes that bind all those threads to a single core (e.g. core 0) when core binding is enabled. Please verify that enabling core binding does not introduce any unwanted binding behaviour for your code.

"},{"location":"test-suite/writing-portable-tests/#defining-omp_num_threads-recommended","title":"Defining OMP_NUM_THREADS (recommended)","text":"

The set_omp_num_threads hook sets the $OMP_NUM_THREADS environment variable based on the number of cpus_per_task defined in the ReFrame test (which in turn is typically set by the assign_tasks_per_compute_unit hook). For OpenMP codes, it is generally recommended to call this hook, to ensure they launch the correct amount of threads.
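Calling it follows the same pattern as the other hooks; as a class-body fragment (assuming hooks has been imported from eessi.testsuite, as before):

```python
    @run_after('setup')
    def set_num_threads(self):
        # set $OMP_NUM_THREADS based on the number of cpus per task
        hooks.set_omp_num_threads(self)
```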

"},{"location":"using_eessi/basic_commands/","title":"Basic commands","text":""},{"location":"using_eessi/basic_commands/#basic-commands-to-access-software-provided-via-eessi","title":"Basic commands to access software provided via EESSI","text":"

EESSI provides software through environment module files and Lmod.

To see which modules (and extensions) are available, run:

module avail\n

Below is a short excerpt of the output produced by module avail, showing 10 modules only.

   PyYAML/5.3-GCCcore-9.3.0\n   Qt5/5.14.1-GCCcore-9.3.0\n   Qt5/5.15.2-GCCcore-10.3.0                               (D)\n   QuantumESPRESSO/6.6-foss-2020a\n   R-bundle-Bioconductor/3.11-foss-2020a-R-4.0.0\n   R/4.0.0-foss-2020a\n   R/4.1.0-foss-2021a                                      (D)\n   re2c/1.3-GCCcore-9.3.0\n   re2c/2.1.1-GCCcore-10.3.0                               (D)\n   RStudio-Server/1.3.1093-foss-2020a-Java-11-R-4.0.0\n

Load modules with module load package/version, e.g., module load R/4.1.0-foss-2021a, and try out the software. See below for a short session

[EESSI 2023.06] $ module load R/4.1.0-foss-2021a\n[EESSI 2021.06] $ which R\n/cvmfs/software.eessi.io/versions/2021.12/software/linux/x86_64/intel/skylake_avx512/software/R/4.1.0-foss-2021a/bin/R\n[EESSI 2023.06] $ R --version\nR version 4.1.0 (2021-05-18) -- \"Camp Pontanezen\"\nCopyright (C) 2021 The R Foundation for Statistical Computing\nPlatform: x86_64-pc-linux-gnu (64-bit)\n\nR is free software and comes with ABSOLUTELY NO WARRANTY.\nYou are welcome to redistribute it under the terms of the\nGNU General Public License versions 2 or 3.\nFor more information about these matters see\nhttps://www.gnu.org/licenses/.\n
"},{"location":"using_eessi/building_on_eessi/","title":"Building software on top of EESSI","text":""},{"location":"using_eessi/building_on_eessi/#building-software-on-top-of-eessi-with-easybuild","title":"Building software on top of EESSI with EasyBuild","text":"

Building on top of EESSI with EasyBuild is relatively straightforward. One crucial feature is that EasyBuild supports building against operating system libraries that are not in a standard prefix (such as /usr/lib). This is required when building against EESSI, since all of the software in EESSI is built against the compatibility layer.

"},{"location":"using_eessi/building_on_eessi/#starting-the-eessi-software-environment","title":"Starting the EESSI software environment","text":"

Start your environment as described here

"},{"location":"using_eessi/building_on_eessi/#using-the-eessi-extend-module","title":"Using the EESSI-extend module","text":"

The EESSI-extend module facilitates building on top of EESSI using EasyBuild. It does a few key things:

  1. It configures EasyBuild to match how the rest of the EESSI software is built
  2. It configures EasyBuild to use a certain installation path (e.g. in your homedir), taking into account the hardware architecture you are building on
  3. It adds the relevant subdirectory from your installation path to your MODULEPATH, to make sure your newly installed modules are available
  4. It loads the EasyBuild module

The EESSI-extend module recognizes a few environment variables. To print an up-to-date list, check the module itself

module help EESSI-extend/2023.06-easybuild\n

The installation prefix is determined by EESSI-extend through the following logic:

  1. If $EESSI_CVMFS_INSTALL is set, software is installed in $EESSI_SOFTWARE_PATH. This variable shouldn't be used by users and would only be used by CVMFS administrators of the EESSI repository.
  2. If $EESSI_SITE_INSTALL is set, the EESSI site installation prefix ($EESSI_SITE_SOFTWARE_PATH) will be used. This is typically where sites hosting a system that has EESSI deployed would install additional software on top of EESSI and make it available to all their users.
  3. If $EESSI_PROJECT_INSTALL is set (and $EESSI_USER_INSTALL is not set), this prefix will be used. You should use this if you want to install additional software on top of EESSI that should also be usable by your project partners on the same system. For example, if you have a project space at /project/my_project that all your project partners can access, you could set export EESSI_PROJECT_INSTALL=/project/my_project/eessi. Make sure that this directory has the SGID permission set (chmod g+s $EESSI_PROJECT_INSTALL). This way, all the additional installations done with EESSI-extend will be put in that prefix, and will get the correct UNIX file permissions so that all your project partners can access it.
  4. If $EESSI_USER_INSTALL is set, this prefix will be used. You should use this if you want to install additional software on top of EESSI just for your own user. For example, you could set export EESSI_USER_INSTALL=$HOME/my/eessi/extend/prefix, and EESSI-extend will install all software in this prefix. Unix file permissions will be set such that these installations will be readable only to the user.

If none of the above apply, the default is a user installation in $HOME/EESSI (i.e. effectively the same as setting EESSI_USER_INSTALL=$HOME/EESSI).
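The selection order above can be summarized in a few lines of Python (a simplified illustration of the logic, not actual code from the EESSI-extend module):

```python
import os

def eessi_extend_install_prefix(env):
    """Simplified sketch of how EESSI-extend picks its installation prefix."""
    if 'EESSI_CVMFS_INSTALL' in env:           # CVMFS administrators only
        return env['EESSI_SOFTWARE_PATH']
    if 'EESSI_SITE_INSTALL' in env:            # site-wide installation
        return env['EESSI_SITE_SOFTWARE_PATH']
    if 'EESSI_PROJECT_INSTALL' in env and 'EESSI_USER_INSTALL' not in env:
        return env['EESSI_PROJECT_INSTALL']    # shared project prefix
    if 'EESSI_USER_INSTALL' in env:
        return env['EESSI_USER_INSTALL']       # per-user prefix
    return os.path.join(env['HOME'], 'EESSI')  # default user installation
```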

Here, we assume you are just an end-user, not having set any of the above environment variables, and loading the EESSI-extend module with the default installation prefix:

module load EESSI-extend/2023.06-easybuild\n

Now, if we check the EasyBuild configuration

eb --show-config\nallow-loaded-modules (E) = EasyBuild, EESSI-extend\nbuildpath            (E) = /tmp/<user>/easybuild/build\ncontainerpath        (E) = /tmp/<user>/easybuild/containers\ndebug                (E) = True\nexperimental         (E) = True\nfilter-deps          (E) = Autoconf, Automake, Autotools, binutils, bzip2, DBus, flex, gettext, gperf, help2man, intltool, libreadline, libtool, M4, makeinfo, ncurses, util-linux, XZ, zlib\nfilter-env-vars      (E) = LD_LIBRARY_PATH\nhooks                (E) = /cvmfs/software.eessi.io/versions/2023.06/init/easybuild/eb_hooks.py\nignore-osdeps        (E) = True\ninstallpath          (E) = /home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2\nmodule-extensions    (E) = True\npackagepath          (E) = /tmp/<user>/easybuild/packages\nprefix               (E) = /tmp/<user>/easybuild\nread-only-installdir (E) = True\nrepositorypath       (E) = /tmp/<user>/easybuild/ebfiles_repo\nrobot-paths          (D) = /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/software/EasyBuild/4.9.4/easybuild/easyconfigs\nrpath                (E) = True\nsourcepath           (E) = /tmp/<user>/easybuild/sources\nsticky-bit           (E) = True\nsysroot              (E) = /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64\ntrace                (E) = True\numask                (E) = 077\nzip-logs             (E) = bzip2\n

Apart from the installpath, this is exactly how EasyBuild is configured when software is built for EESSI itself.

Note

Be aware that EESSI-extend will optimize the installation for your current hardware architecture, and the installpath also contains this architecture in its directory structure (just like regular EESSI installations do). This means you should run the installation on the node type on which you also want to use the software. If you want the installation to be present for multiple node types, you can simply run it once on each type of node.

And, if we check our MODULEPATH, we see that the installpath that EasyBuild will use here is prepended

$ echo $MODULEPATH\n/home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all:...\n

"},{"location":"using_eessi/building_on_eessi/#building","title":"Building","text":"

Now, you are ready to build. For example, suppose you want to install netcdf4-python-1.6.5-foss-2023b.eb (which is not present at the time of writing), you run:

eb netcdf4-python-1.6.5-foss-2023b.eb\n

Note

If this netCDF for Python module is available by the time you try this, you can force a local rebuild by adding the --rebuild argument in order to experiment with building locally, or pick a different easyconfig to build.

"},{"location":"using_eessi/building_on_eessi/#using-the-newly-built-module","title":"Using the newly built module","text":"

If the installation was done in the site installation path (i.e. EESSI_SITE_INSTALL was set, and things were installed in /cvmfs/software.eessi.io/host_injections/...), the modules are available by default to anyone who has initialized the EESSI software environment.

If the installation through EESSI-extend was done in an EESSI_PROJECT_INSTALL or EESSI_USER_INSTALL location, one has to make sure to load the EESSI-extend module before loading the module of interest, since this adds those prefixes to the MODULEPATH.

If we don't have the EESSI-extend module loaded, it will not find any modules installed in the EESSI_PROJECT_INSTALL or EESSI_USER_INSTALL locations:

$ module unload EESSI-extend\n$ module av netcdf4-python/1.6.5-foss-2023b\nNo module(s) or extension(s) found!\n

But, if we load EESSI-extend first:

$ module load EESSI-extend/2023.06-easybuild\n$ module av netcdf4-python/1.6.5-foss-2023b\n\n---- /home/<user>/eessi/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all ----\n   netcdf4-python/1.6.5-foss-2023b\n

This means you'll always need to load the EESSI-extend module if you want to use these modules (particularly when you want to use them in a job script).

"},{"location":"using_eessi/building_on_eessi/#manually-building-software-op-top-of-eessi-without-easybuild","title":"Manually building software on top of EESSI (without EasyBuild)","text":"

Warning

We are working on a module file that should make building on top of EESSI (without using EasyBuild) more straightforward, particularly when using Autotools or CMake. Right now, it is a little convoluted and requires you to have a decent grasp of * What a runtime dynamic linker (ld-linux*.so) is and does * How to influence the behaviour of the runtime linker with LD_LIBRARY_PATH * The difference between LIBRARY_PATH and LD_LIBRARY_PATH

As such, this documentation is intended for \"experts\" in the runtime linker and its behaviour, and most cases are untested. Any feedback on this topic is highly appreciated.

Building and running software on top of EESSI without EasyBuild is not straightforward and requires taking a few precautions.

It is expected that you will have loaded all of your required dependencies as modules from the EESSI environment. Since EESSI sets LIBRARY_PATH for all of the modules and the GCC compiler is configured to use the compat layer, no additional configuration should be required to execute a standard build process. On the other hand, EESSI does not set LD_LIBRARY_PATH, so at runtime the executable will need help finding the libraries it needs to actually execute. The easiest way to circumvent this requirement is by also setting the environment variable LD_RUN_PATH at compile time. With LD_RUN_PATH set, the program will tell the dynamic linker to search those paths when the program is executed.

EESSI uses a compatibility layer to ensure that it takes as few libraries from the host as possible. The safest way to make sure that all libraries point to the required locations in the compatibility layer (and do not leak in from the host operating system) is to start an EESSI prefix shell, via the startprefix command shipped with the compatibility layer, before building.

Warning

RPATH should never point to a compatibility layer directory, only to software layer ones, as all resolving is done via the runtime linker (ld-linux*.so) that is shipped with EESSI, which automatically searches these locations.

The biggest downside of this approach is that your executable becomes bound to the architecture you linked your libraries for: if you add to your executable's RPATH a libhdf5.so compiled for intel_avx512, you will not be able to run that binary on a machine with a different architecture. If this is an issue for you, you should look into how EESSI itself organises the location of binaries, and perhaps leverage the relevant environment variables (e.g., EESSI_SOFTWARE_SUBDIR).
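One way to guard against running such a binary on the wrong machine, sketched below with a hypothetical marker file, is to record $EESSI_SOFTWARE_SUBDIR (exported by the EESSI initialisation, e.g. x86_64/amd/zen2) at build time and compare it before running:

```shell
# At build time: record the architecture the binary was linked for
# (build_arch.txt is a hypothetical marker file)
echo "${EESSI_SOFTWARE_SUBDIR}" > build_arch.txt

# Before running on another machine: refuse to run on a mismatch
if [ "$(cat build_arch.txt)" != "${EESSI_SOFTWARE_SUBDIR}" ]; then
    echo "binary built for '$(cat build_arch.txt)', not '${EESSI_SOFTWARE_SUBDIR}'" >&2
    exit 1
fi
```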

"},{"location":"using_eessi/eessi_demos/","title":"Running EESSI demos","text":"

To really experience how using EESSI can significantly facilitate the work of researchers, we recommend running one or more of the EESSI demos.

First, clone the eessi-demo Git repository, and move into the resulting directory:

git clone https://github.com/EESSI/eessi-demo.git\ncd eessi-demo\n

The contents of the directory should be something like this:

$ ls -l\ntotal 48\ndrwxrwxr-x 2 example users  4096 May 15 13:26 Bioconductor\ndrwxrwxr-x 2 example users  4096 May 15 13:26 ESPResSo\ndrwxrwxr-x 2 example users  4096 May 15 13:26 GROMACS\n-rw-rw-r-- 1 example users 18092 Dec  5  2022 LICENSE\ndrwxrwxr-x 2 example users  4096 May 15 13:26 OpenFOAM\n-rw-rw-r-- 1 example users   543 May 15 13:26 README.md\ndrwxrwxr-x 3 example users  4096 May 15 13:26 scripts\ndrwxrwxr-x 2 example users  4096 May 15 13:26 TensorFlow\n

The directories we care about are those that correspond to particular scientific software, like Bioconductor, GROMACS, OpenFOAM, TensorFlow, ...

Each of these contains a run.sh script that can be used to start a small example run with that software. Each example takes just a couple of minutes to run, even with limited resources.

"},{"location":"using_eessi/eessi_demos/#example-running-tensorflow","title":"Example: running TensorFlow","text":"

Let's try running the TensorFlow example.

First, we need to make sure that our environment is set up to use EESSI:

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n

Change to the TensorFlow subdirectory of the eessi-demo Git repository, and execute the run.sh script:

[EESSI 2023.06] $ cd TensorFlow\n[EESSI 2023.06] $ ./run.sh\n

Shortly after starting the script you should see output as shown below, which indicates that TensorFlow has started running:

Epoch 1/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.2983 - accuracy: 0.9140\nEpoch 2/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.1444 - accuracy: 0.9563\nEpoch 3/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.1078 - accuracy: 0.9670\nEpoch 4/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.0890 - accuracy: 0.9717\nEpoch 5/5\n   1875/1875 [==============================] - 3s 1ms/step - loss: 0.0732 - accuracy: 0.9772\n313/313 - 0s - loss: 0.0679 - accuracy: 0.9790 - 391ms/epoch - 1ms/step\n\nreal   1m24.645s\nuser   0m16.467s\nsys    0m0.910s\n
"},{"location":"using_eessi/eessi_in_ci/","title":"Leveraging EESSI for Continuous Integration","text":"

EESSI is already available as both a GitHub Action and a GitLab CI/CD component, which means you can easily integrate EESSI into your continuous integration workflows within those ecosystems.

Note

Both of these EESSI CI tools support the use of direnv, which allows you to store your desired environment in a .envrc file in your repository. See the documentation of the individual tools for detailed usage.

"},{"location":"using_eessi/eessi_in_ci/#the-eessi-github-action","title":"The EESSI GitHub Action","text":"

The EESSI GitHub Action can be found on the GitHub Marketplace, at https://github.com/marketplace/actions/eessi. Below is a minimal example of how to leverage the action; for detailed usage, please refer to the official action documentation.

name: Minimal usage\non: [push, pull_request]\njobs:\n  build:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: eessi/github-action-eessi@v3\n    - name: Test EESSI\n      run: |\n        module avail\n      shell: bash\n
"},{"location":"using_eessi/eessi_in_ci/#the-eessi-gitlab-cicd-component","title":"The EESSI GitLab CI/CD component","text":"

The EESSI GitLab CI/CD component can be found in the GitLab CI/CD Catalog, at https://gitlab.com/explore/catalog/eessi/gitlab-eessi. Below is a minimal example of how to leverage the component; for detailed usage, please refer to the official component documentation.

include:\n  - component: $CI_SERVER_FQDN/eessi/gitlab-eessi/eessi@1.0.5\n\nbuild:\n  stage: build\n  script:\n    - module spider GROMACS\n
"},{"location":"using_eessi/setting_up_environment/","title":"Setting up your environment","text":"

In Unix-like systems, environment variables are used to configure the environment in which applications and scripts run. To set up EESSI, you need to configure a specific set of environment variables so that your operating system is aware that EESSI exists and is to be used. We have prepared a few automated approaches that do this for you: you can either load an EESSI environment module or source an initialisation script for bash.

With any of the approaches below, the first use may seem to take a while, as any necessary data is downloaded in the background from a Stratum 1 server (which is part of the CernVM-FS infrastructure used to distribute files for EESSI).

"},{"location":"using_eessi/setting_up_environment/#loading-an-eessi-environment-module","title":"Loading an EESSI environment module","text":"

There are a few different scenarios where you may want to set up the EESSI environment by loading an EESSI environment module. The simplest scenario is one where you do not already have an environment module tool on your system; in this case, we configure the Lmod module tool shipped with EESSI and automatically load the EESSI environment module:

source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n
This command configures Lmod for your system and automatically loads the EESSI module so that EESSI is immediately available to use. If you would like to see what environment variables the module sets, you can use module show EESSI.

Your environment is now set up, you are ready to start running software provided by EESSI!

What if I don't use a bash shell?

The example above is shown for a bash shell but the environment module approach supports all the shells that Lmod itself supports (bash, csh, fish, ksh, zsh):

source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/csh\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/fish\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/ksh\n
source /cvmfs/software.eessi.io/versions/2023.06/init/lmod/zsh\n

What if I already have Lmod installed or another module tool is available on the system?

You can check whether the module command is already defined on your system, and which version it has, with

command -v module && module --version\n

  1. If you are already using Lmod (modules based on Lua) with version >= 8.6:

    In this case, we recommend resetting $MODULEPATH, because EESSI is not designed to mix modules coming from EESSI and from your system.

    module unuse $MODULEPATH\nmodule use /cvmfs/software.eessi.io/init/modules\nmodule load EESSI/2023.06\n

    Your environment is now set up, you are ready to start running software provided by EESSI!

  2. If you are using Lmod with a version older than 8.6, or any other module tool utilizing MODULEPATH (e.g., Tcl-based Environment Modules):

    It is recommended to unset $MODULEPATH to prevent Lmod from attempting to build a cache for your module tree (which can be very slow if you have a lot of modules). Unsetting $MODULEPATH is a good idea in general, so that you do not mix local and EESSI modules. You will then need to initialise a compatible version of Lmod, for example the one shipped with EESSI:

    unset MODULEPATH\nsource /cvmfs/software.eessi.io/versions/2023.06/init/lmod/bash\n

    Your environment is now set up, you are ready to start running software provided by EESSI!

Why do we recommend unsetting MODULEPATH?

Unsetting the $MODULEPATH environment variable, which tells Lmod in which directories environment module files are available, may be necessary. The underlying reason is that EESSI and your system are most likely based on two different operating system distributions: EESSI uses its own compatibility layer, while your system almost certainly uses some other Linux distribution. If you can find a way to ensure that the software stacks from your site and EESSI do not mix (in particular when someone is building new software!), then that should be good enough.

"},{"location":"using_eessi/setting_up_environment/#sourcing-the-eessi-bash-initialisation-script","title":"Sourcing the EESSI bash initialisation script","text":"

This is supported exclusively for bash shell users. If you're using a different shell, please use the alternative approach of loading an EESSI environment module.

You can see what your current shell is with the command echo $SHELL

You can initialise EESSI (in a non-reversible way) by running the command:

source /cvmfs/software.eessi.io/versions/2023.06/init/bash\n

You should see the following output:

Found EESSI repo @ /cvmfs/software.eessi.io/versions/2023.06!\narchdetect says x86_64/amd/zen2  # (1)\narchdetect could not detect any accelerators\nUsing x86_64/amd/zen2 as software subdirectory.\nFound Lmod configuration file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua\nFound Lmod SitePackage.lua file at /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/SitePackage.lua\nUsing /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2 as the site extension directory for installations.\nUsing /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all as the directory to be added to MODULEPATH.\nUsing /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all as the site extension directory to be added to MODULEPATH.\nFound libcurl CAs file at RHEL location, setting CURL_CA_BUNDLE\nInitializing Lmod...\nPrepending /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nPrepending site path /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all to $MODULEPATH...\nEnvironment set up to use EESSI (2023.06), have fun!\n{EESSI 2023.06} [user@system ~]$  # (2)!\n

What is reported at (1) depends on the CPU architecture of the machine on which you run the source command.

At (2), the prompt indicates that you have access to the EESSI software stack.

Your environment is now set up, you are ready to start running software provided by EESSI!

"},{"location":"blog/archive/2024/","title":"2024","text":""}]} \ No newline at end of file diff --git a/using_eessi/building_on_eessi/index.html b/using_eessi/building_on_eessi/index.html index 99dc6b38c..76a13a3fc 100644 --- a/using_eessi/building_on_eessi/index.html +++ b/using_eessi/building_on_eessi/index.html @@ -2162,7 +2162,11 @@

Manually b

Run!

-

!!! Note RPATH should never point to a compatibility layer directory, only to software layer ones, as all resolving is done via the runtime linker (ld-linux*.so) that is shipped with EESSI, which automatically searches these locations.

+
+

Warning

+

RPATH should never point to a compatibility layer directory, only to software layer ones, as all resolving is done via the runtime linker (ld-linux*.so) +that is shipped with EESSI, which automatically searches these locations.

+

The biggest downside of this approach is that your executable becomes bound to the architecture you linked your libraries for, i.e., if you add to your executable RPATH a libhdf5.socompiled for intel_avx512, you will not be able to run that binary on a machine with a different architecture. If this is an issue for you, you should look into how EESSI itself organises the location of binaries and perhaps leverage the relevant environment variables (e.g., EESSI_SOFTWARE_SUBDIR).