Open-source project: hlrs-tasc/julia-on-hpc-systems
Repository URL: https://github.com/hlrs-tasc/julia-on-hpc-systems

# Julia on HPC systems

The purpose of this repository is to document best practices for running Julia on HPC systems (i.e., "supercomputers"). At the moment, information relevant to both supercomputer operators and users is collected here. There is no guarantee of permanence or that the information here is up to date, nor of a useful ordering and/or categorization of issues.

## For operators

### Official Julia binaries vs. building from source

According to this Discourse post, the difference between compiling Julia from source with architecture-specific optimizations and using the official Julia binaries is negligible. This has been confirmed by Ludovic Räss for an Nvidia DGX-1 system at CSCS, where likewise no performance differences between a Spack-installed version and the official binaries were found (April 2022). Since installing from source using, e.g., Spack can sometimes be cumbersome, the general recommendation is to go with the pre-built binaries unless benchmarks on the target system show otherwise. This is also the current approach taken at NERSC, CSCS, and PC2.

In June 2022, a new Julia PR was created (JuliaLang/julia#45641) that aims to add PGO (profile-guided optimization) and LTO (link-time optimization) to the Julia build process.

*Last update: June 2022*

### Ensure correct libraries are loaded

When using Julia on a system that uses an environment-variable based module system (such as modules or Lmod), care must be taken that Julia picks up the libraries shipped with its binary distribution rather than incompatible versions provided by loaded modules. One possibility to achieve this is to create a wrapper shell script that
modifies `LD_LIBRARY_PATH` accordingly before starting the actual Julia binary:

```bash
#!/usr/bin/env bash

# This wrapper makes sure the julia binary distribution picks up the GCC
# libraries provided with it correctly, meaning that it does not rely on
# the gcc-libs version.
# Dr Owain Kenway, 20th of July, 2021
# Source: https://github.com/UCL-RITS/rcps-buildscripts/blob/04b2e2ccfe7e195fd0396b572e9f8ff426b37f0e/files/julia/julia.sh

location=$(readlink -f "$0")
directory=$(readlink -f "$(dirname "${location}")/..")

export LD_LIBRARY_PATH=${directory}/lib/julia:${LD_LIBRARY_PATH}
exec "${directory}/bin/julia" "$@"
```

*Last update: April 2022*

### Julia depot path

Since the available file systems can differ significantly between HPC centers, it is hard to make a general statement about where the Julia depot folder (by default `~/.julia` on Unix-like systems) should be placed.
On some systems, it resides in the user's home directory (e.g., at NERSC). On other systems, it is put on a parallel scratch file system (e.g., at CSCS and PC2). At the time of writing (April 2022), there does not seem to be reliable performance data available that could help to make a data-based decision. If multiple platforms, e.g., systems with different architectures, would access the same Julia depot, for example because the file system is shared, it might make sense to create platform-dependent Julia depots by setting the `JULIA_DEPOT_PATH` environment variable to a platform-specific path.

### MPI.jl

It is generally recommended to set the environment variable `JULIA_MPI_BINARY=system`,
such that MPI.jl will always use a system MPI instead of the Julia artifact (i.e., MPI_jll.jl). For more configuration options, see this part of the MPI.jl documentation.

Additionally, on the NERSC systems, there is a pre-built MPI.jl for each programming environment, which is loaded through a settings module. More information on the NERSC module file setup can be found here.

### CUDA.jl

It seems to be generally advisable to set the environment variables

```bash
JULIA_CUDA_USE_BINARYBUILDER=false
JULIA_CUDA_USE_MEMORY_POOL=none
```

in the module files when loading Julia on a system with GPUs. Otherwise, Julia will try to download its own BinaryBuilder.jl-provided CUDA stack, which is typically not what you want on a production HPC system. Instead, you should make sure that Julia finds the local CUDA installation by setting the relevant environment variables (see also the CUDA.jl docs). Disabling the memory pool is advisable to make CUDA-aware MPI work on multi-GPU nodes (see also the MPI.jl docs).

### Modules file setup

Johannes Blaschke provides scripts and
templates to set up module files for Julia on some of NERSC's systems. There are a number of environment variables that should be considered for setting through the module mechanism; the sections above list the most relevant candidates, such as `JULIA_DEPOT_PATH` and the MPI- and CUDA-related variables.
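To make this concrete, the following is a minimal, hypothetical sketch of a shell fragment that a Julia module file could evaluate. The variable names follow the recommendations in the respective package documentation; the depot location under `$SCRATCH` and the per-architecture layout are placeholder assumptions that must be adapted to the local system:

```bash
#!/usr/bin/env bash
# Sketch of environment settings for a Julia module file (hypothetical paths).

# Platform-dependent depot path, so that machines with different
# architectures sharing one file system use separate depots.
export JULIA_DEPOT_PATH="${SCRATCH:-$HOME}/julia-depot/$(uname -m)"

# Always use the system MPI installation instead of the MPI_jll.jl artifact.
export JULIA_MPI_BINARY=system

# Do not download a BinaryBuilder.jl-provided CUDA stack; use the local CUDA.
export JULIA_CUDA_USE_BINARYBUILDER=false

# Disable the CUDA memory pool so that CUDA-aware MPI works on multi-GPU nodes.
export JULIA_CUDA_USE_MEMORY_POOL=none
```

Splitting the depot by `uname -m` is one simple way to keep depots of different architectures apart; centers may prefer a finer-grained scheme (e.g., per micro-architecture or per programming environment).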
### Easybuild resources

Samuel Omlin and colleagues from CSCS provide their Easybuild configuration files used for Piz Daint online at https://github.com/eth-cscs/production/tree/master/easybuild/easyconfigs/j/Julia. For example, there are configurations available for Julia 1.7.2 and for Julia 1.7.2 with CUDA support. Looking at these files also helps to decide which kinds of environment variables are useful to set.

### Further resources
## For users

### HPC systems with Julia support

We maintain an (incomplete) list of HPC systems that provide a Julia installation and/or support for using Julia to their users. For this, we use the following nomenclature:
#### Australasia

#### Europe

#### North America
### Other HPC systems

There are a number of other HPC systems that have been reported to provide a Julia installation and/or Julia support, but for which not enough details are known to put them on the list above.
### License and contributing

The contents of this repository are published under the MIT license (see LICENSE). Our main goal is to publicly curate information on using Julia on HPC systems, as a service from the community and for the community. Therefore, we are very happy to accept contributions from everyone, preferably in the form of a PR.

### Authors

This repository is maintained by Michael Schlottke-Lakemper (University of Stuttgart, Germany). The following people have provided valuable contributions, either in the form of PRs or via private communication.
### Disclaimer

Everything is provided as is and without warranty. Use at your own risk!