Add CUDA support #172
Conversation
CUDA itself is not shipped by EESSI and has to be installed on the host. The scripts perform various checks and download and install the CUDA compat libs. Modules with CUDA as a dependency are hidden in Lmod unless the CUDA compat libs are installed, which is only done when CUDA itself is installed on the host. This aspect still has to be tested with an updated Lmod version in the EESSI compat layer.
Encouraging!
or ec_dict["toolchain"]["name"] in CUDA_ENABLED_TOOLCHAINS
):
    key = "modluafooter"
    value = 'add_property("arch","gpu")'
I think gpu is a recognised property in Lmod, so it's a good choice for now. Once we add AMD support it will get more complicated.
We can add a new property by extending the property table propT. To do so, we could add a file init/lmodrc.lua with a new property. This file can be loaded using the env var $LMOD_RC. Unfortunately, we do not seem to be able to add entries to arch but rather have to add a new property (or find a way to extend arch that I'm missing).
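A minimal sketch of what that rc file could look like (the property name eessi_gpu and its values are assumptions for illustration; the exact propT layout should be checked against the Lmod documentation):

```bash
# write an Lmod rc file defining a custom property and point Lmod at it
mkdir -p init
cat > init/lmodrc.lua << 'EOF'
propT = {
    eessi_gpu = {                      -- hypothetical property name
        validT = { nvidia = 1 },
        displayT = {
            nvidia = { short = "(gpu)", long = "built with NVIDIA GPU support" },
        },
    },
}
EOF
export LMOD_RC=$PWD/init/lmodrc.lua
```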
gpu_support/add_gpu_support.sh (Outdated)
# TODO: needs more thorough testing
os_family=$(uname | tr '[:upper:]' '[:lower:]')

# Get OS version
@boegel Does EB do this already, can we hook into that?
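For reference, a sketch of how the script could do this in plain bash (assuming /etc/os-release is present, which holds on most modern Linux distributions):

```bash
# derive OS family and version
os_family=$(uname | tr '[:upper:]' '[:lower:]')
if [ -f /etc/os-release ]; then
    . /etc/os-release
    os_name=${ID}             # e.g. "rhel", "ubuntu"
    os_version=${VERSION_ID}  # e.g. "8.6", "22.04"
fi
```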
gpu_support/test_cuda (Outdated)
module use /cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/modules/all/
module load CUDA
tmp_dir=$(mktemp -d)
cp -r $EBROOTCUDA/samples $tmp_dir
CUDA no longer ships samples; there is a plan to ship the (compiled) CUDA samples with EESSI.
There are indications that we may be allowed to ship the CUDA runtime with EESSI, which would mean we wouldn't (necessarily) need to install CUDA unless people actually want to build their own software on top of EESSI. I would go with this PR as is right now, but in a future pilot we should make that installation optional (making another code branch that only creates the software installation directory so that the Lmod plugin will still work). In that scenario, though, we will need some major tweaking of the CUDA module shipped with EESSI; it would need conditionals based on what is available on the host. We'd also probably want to shadow
Only install cuda compat libs when either they are not installed yet or they are outdated
I think it's time to remove the WIP label and get someone else's feedback on this!
Allow using an environment variable to skip GPU checks
module load EasyBuild
# we need the --rebuild option, since the module file is shipped with EESSI
tmpdir=$(mktemp -d)
eb --rebuild --installpath-modules=${tmpdir} --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
Right now this makes testing difficult, as the actual CUDA module in EESSI is not available. You might be better off here checking if the CUDA module exists and, if so, prefixing this command with EASYBUILD_INSTALLPATH_MODULES=${tmpdir}
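A sketch of that approach (assuming Lmod's module is-avail is usable in this shell):

```bash
# only divert the module file if EESSI already ships a CUDA module
if module is-avail CUDA/${install_cuda_version}; then
    export EASYBUILD_INSTALLPATH_MODULES=${tmpdir}
fi
eb --rebuild --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
```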
# The rpm and deb files contain the same libraries, so we just stick to the rpm version.
# If p7zip is missing from the software layer (for whatever reason), we need to install it.
# This has to happen in host_injections, so we check first if it is already installed there.
module use ${cuda_install_dir}/modules/all/
Maybe do this conditionally (i.e., only if this directory exists)
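For instance:

```bash
# only add the path if it exists
[ -d ${cuda_install_dir}/modules/all ] && module use ${cuda_install_dir}/modules/all/
```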
# we need the --rebuild option, since the module file is shipped with EESSI
tmpdir=$(mktemp -d)
eb --rebuild --installpath-modules=${tmpdir} --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
Suggested change:
# we need the --rebuild option and a random dir for the module if the module file is shipped with EESSI
if [ -f ${EESSI_SOFTWARE_PATH}/modules/all/CUDA/${install_cuda_version}.lua ]; then
    tmpdir=$(mktemp -d)
    extra_args="--rebuild --installpath-modules=${tmpdir}"
fi
eb ${extra_args} --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
gpu_support/test_cuda.sh (Outdated)
echo "Cannot test CUDA, modules path does not exist, exiting now..." | ||
exit 1 | ||
fi | ||
module load CUDA |
We should probably load the specific version of CUDA here.
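i.e. something like:

```bash
module load CUDA/${install_cuda_version}
```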
For me to get this working out of the box right now I needed
The discussion today at the EESSI Community Meeting led to the following design suggestions/comments:
The benefit of this approach is that we only install the CUDA SDK if the user actually wants it. It will greatly speed up the GPU support script since there will be no need for any
Whitelisted CUDA libraries can now be shipped with EESSI. The other libraries and files are replaced with symlinks to host_injections. A compiled CUDA sample can now also be shipped with EESSI. This is relevant if users only need the runtime capabilities and not the whole CUDA suite (which would include the compilers). It is now possible to solely install the compat libs as a user and get access to the runtime environment this way. It is still possible to also install the whole CUDA suite. CUDA-enabled modules with the gpu property now only load if the compat libs are installed in host_injections.
The CUDA versions needed by modules are now written as envvars that are exported into the module files. The CUDA version for which the current compat libs are installed is saved in a text file at ../host_injections/nvidia/latest/version.txt. The Lmod hook called when loading modules with the gpu property now compares these two versions and exits if the installed version needs to be updated.
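A sketch of that comparison in shell terms (the envvar name EESSI_CUDA_VERSION is an assumption for illustration; the actual hook is implemented for Lmod):

```bash
# compare the CUDA version a module needs against the installed compat libs
required=${EESSI_CUDA_VERSION}  # hypothetical envvar exported by the module file
installed=$(cat ../host_injections/nvidia/latest/version.txt)
newest=$(printf '%s\n%s\n' "${required}" "${installed}" | sort -V | tail -n 1)
if [ "${newest}" != "${installed}" ]; then
    echo "Installed CUDA compat libs (${installed}) are older than required (${required}), please update them" >&2
    exit 1
fi
```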
The fix for removing the temporary test dir is needed when cloning the samples from GitHub, i.e. for CUDA > 11.6.0. Otherwise, the script call from the eb hook will get stuck.
###############################################################################################
# Install CUDA
cuda_install_dir="${EESSI_SOFTWARE_PATH/versions/host_injections}"
mkdir -p ${cuda_install_dir}
if [ "${install_cuda}" != false ]; then
  bash $(dirname "$BASH_SOURCE")/cuda_utils/install_cuda.sh ${install_cuda_version} ${cuda_install_dir}
fi
Let's break this into a separate script (and PR) since it will be needed by #212.
You also need to check the exit code on the creation of cuda_install_dir, since this may fail.
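For example:

```bash
mkdir -p ${cuda_install_dir}
if [ $? -ne 0 ]; then
    echo "Failed to create ${cuda_install_dir}, exiting now..." >&2
    exit 1
fi
```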
bash $(dirname "$BASH_SOURCE")/cuda_utils/install_cuda_compatlibs_loop.sh ${cuda_install_dir} ${install_cuda_version}
fi
###############################################################################################
# Prepare installation of CUDA compat libraries, i.e. install p7zip if it is missing
You can drop stuff related to p7zip because of #212 (and that also means we can drop prepare_cuda_compatlibs.sh entirely)
# Otherwise, give up
bash $(dirname "$BASH_SOURCE")/cuda_utils/install_cuda_compatlibs_loop.sh ${cuda_install_dir} ${install_cuda_version}

cuda_version_file="/cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest/version.txt"
Suggested change:
cuda_version_file="/cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest/eessi_compat_version.txt"
I also think that this creation should be part of install_cuda_compatlibs_loop.sh, and we should put the supported CUDA version in there according to the compat libs, not the version we need (this will help us avoid unnecessary updates in the future).
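Something along these lines inside install_cuda_compatlibs_loop.sh (a sketch; compat_libs_cuda_version stands for however the supported version is determined from the installed compat libs):

```bash
# record the CUDA version the installed compat libs support, not the version we happened to need
compat_version_dir="/cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest"
mkdir -p ${compat_version_dir}
echo "${compat_libs_cuda_version}" > ${compat_version_dir}/eessi_compat_version.txt
```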
install_cuda_version=$1
cuda_install_dir=$2

# TODO: Can we do a trimmed install?
This is done now via your hook
#!/bin/bash

install_cuda_version=$1
cuda_install_dir=$2
General CUDA installation is done via #212 now, so I don't think you need this argument. This script is only about installing the CUDA package under host_injections... but changing the name of the script to reflect that is probably a good idea.
# This is only relevant for users, the shipped CUDA installation will
# always be in versions instead of host_injections and have symlinks pointing
# to host_injections for everything we're not allowed to ship
if [ -f ${cuda_install_dir}/software/CUDA/${install_cuda_version}/EULA.txt ]; then
The if/else is still good, except we should be checking under the host_injections path. This will allow us to skip any check on available space etc.
You should construct cuda_install_dir rather than take it as an argument.
Also, we should allow for a forced installation to override this check
I'd prefer that we ship the EULA text, so I think we should check for an expected broken symlink here:
if [[ -L "${cuda_install_dir}/software/CUDA/bin/nvcc" && ! -e "${cuda_install_dir}/software/CUDA/bin/nvcc" ]]; then
avail_space=$(df --output=avail ${cuda_install_dir}/ | tail -n 1 | awk '{print $1}')
if (( ${avail_space} < 16000000 )); then
  echo "Need more disk space to install CUDA, exiting now..."
  exit 1
This is a tricky one: we need space for the sources, space for the build, and space for the install, but people can choose where to put all of these. I guess we leave it as is for now, but allow people to set an envvar to override this check (and tell them about that envvar in the error message).
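A sketch of such an override (the envvar name EESSI_OVERRIDE_CUDA_SPACE_CHECK is an assumption):

```bash
avail_space=$(df --output=avail ${cuda_install_dir}/ | tail -n 1 | awk '{print $1}')
if (( avail_space < 16000000 )) && [ -z "${EESSI_OVERRIDE_CUDA_SPACE_CHECK}" ]; then
    echo "Need more disk space to install CUDA (set EESSI_OVERRIDE_CUDA_SPACE_CHECK to skip this check), exiting now..." >&2
    exit 1
fi
```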
# install cuda in host_injections
module load EasyBuild
# we need the --rebuild option and a random dir for the module if the module file is shipped with EESSI
if [ -f ${EESSI_SOFTWARE_PATH}/modules/all/CUDA/${install_cuda_version}.lua ]; then
If this script is standalone, we'll need to guarantee EESSI_SOFTWARE_PATH is defined.
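For instance (a sketch, assuming the pilot's usual init script location):

```bash
# make sure the EESSI environment is initialised before relying on its variables
if [ -z "${EESSI_SOFTWARE_PATH}" ]; then
    source /cvmfs/pilot.eessi-hpc.org/latest/init/bash
fi
```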
extra_args="--rebuild --installpath-modules=${tmpdir}"
fi
eb ${extra_args} --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
ret=$?
Let's import the bash functions defined in utils.sh and use them throughout (where appropriate).
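A sketch of what that could look like (the relative path to utils.sh and the check_exit_code helper are assumptions):

```bash
# pull in shared helper functions
source $(dirname "$BASH_SOURCE")/../utils.sh
eb ${extra_args} --installpath=${cuda_install_dir}/ CUDA-${install_cuda_version}.eb
check_exit_code $? "CUDA installed under ${cuda_install_dir}" "CUDA installation failed"  # hypothetical helper
```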
@huebner-m Let's branch out a separate PR for the script to install CUDA under host_injections
if [ -w /cvmfs/pilot.eessi-hpc.org/host_injections ]; then
  mkdir -p ${host_injections_dir}
else
  echo "Cannot write to eessi host_injections space, exiting now..." >&2
Let's start using utils.sh here.
fi
cd ${host_injections_dir}

# Check if our target CUDA is satisfied by what is installed already
Do we know what our target CUDA version is at this point? And if the nvidia-smi result is good enough, what then? I guess we should check that this version comes from an installation of the compat libs, otherwise we still need to install the compat libs.
If the supported CUDA version is new enough and comes from an EESSI installation of the CUDA compat libs, we can already exit gracefully.
We should leverage the contents of /cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest/eessi_compat_version.txt here.
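A sketch of that early exit (assuming install_cuda_version holds the target version):

```bash
compat_version_file="/cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest/eessi_compat_version.txt"
if [ -f "${compat_version_file}" ]; then
    supported=$(cat "${compat_version_file}")
    # if the supported version is at least the target version, there is nothing to do
    if [ "$(printf '%s\n%s\n' "${install_cuda_version}" "${supported}" | sort -V | head -n 1)" = "${install_cuda_version}" ]; then
        echo "CUDA compat libs already support CUDA ${install_cuda_version}, exiting gracefully"
        exit 0
    fi
fi
```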
@@ -0,0 +1,92 @@
#!/bin/bash
Shouldn't this be a compat layer bash?
It's fine as is, as long as the first thing we do is source the EESSI environment
# p7zip is installed under host_injections for now, make that known to the environment
if [ -d ${cuda_install_dir}/modules/all ]; then
  module use ${cuda_install_dir}/modules/all/
fi
You can drop this
fi

# Create the space to host the libraries
mkdir -p ${host_injection_linker_dir}
We should always check exit codes on our commands; it seems like a function that does that for us is needed.
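A sketch of such a helper (the name run_or_die is hypothetical):

```bash
# run a command and abort with a message if it fails
run_or_die() {
    "$@" || { echo "Command failed: $*" >&2; exit 1; }
}

run_or_die mkdir -p ${host_injection_linker_dir}
```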
if [ -d ${cuda_install_dir}/modules/all ]; then
  module use ${cuda_install_dir}/modules/all/
else
  echo "Cannot load CUDA, modules path does not exist, exiting now..."
  exit 1
fi
Drop this, no need for the module use: CUDA-Samples is shipped with EESSI. Our Lmod hook should cause the load of (a specific version of) CUDA-Samples (not CUDA, since we only deal with compat libs here) to fail unless the compat libs are in place (i.e. Lmod should ensure the existence of /cvmfs/pilot.eessi-hpc.org/host_injections/nvidia/latest/eessi_compat_version.txt).
exit 1
else
  echo "Successfully loaded CUDA, you are good to go! :)"
  echo " - To build CUDA enabled modules use ${EESSI_SOFTWARE_PATH/versions/host_injections} as your EasyBuild prefix"
This is not required; they can build where they like (but it is a very sensible location!)
@@ -0,0 +1,31 @@
#!/bin/bash
This script can be dropped
@@ -0,0 +1,82 @@
#!/bin/bash
This script can be greatly simplified: just load CUDA-Samples and see if deviceQuery runs.
We can expand the testing from there.
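A minimal sketch of that simplified test (assuming the CUDA-Samples module puts deviceQuery on the PATH):

```bash
#!/bin/bash
module load CUDA-Samples
if deviceQuery; then
    echo "CUDA runtime works"
else
    echo "deviceQuery failed, check the CUDA compat libs installation" >&2
    exit 1
fi
```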
@huebner-m Break the two compat libs scripts off into another PR, the testing script into another, the Lmod hook into another, and the docs into another (we can finalise those once the others are ready).
GPU support implemented with #434
This WIP PR follows up on the work done in the Hackathons.
The following features have been implemented:
- check for the presence of a GPU on the host (via nvidia-smi)
- check that host_injections is a writable path
- install the CUDA compat libs in host_injections (using the appropriate rpm or deb files and tools)
- check that the space in host_injections is sufficient to install CUDA
- add the Lmod property gpu (based on https://github.com/easybuilders/JSC/blob/2022/Custom_Hooks/eb_hooks.py#L335)
- hide modules with the property gpu if CUDA is not installed

Open tasks:
Related issues: