Slurm Workload Manager

Slurm
Stable release 15.08
Development status active
Written in C
Operating system Linux, AIX, BSDs, Mac OS X, Solaris
Type Job Scheduler for Clusters and Supercomputers
License GNU General Public License
Website slurm.schedmd.com

The Slurm Workload Manager (formerly known as the Simple Linux Utility for Resource Management, or SLURM), or Slurm for short, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters. It provides three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (typically a parallel job such as an MPI application) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending jobs.
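The first two functions are visible in an ordinary batch job: a user requests an allocation with #SBATCH directives, and Slurm launches and monitors the work on the granted nodes. The following is a minimal, illustrative job script; the job name, program name, partition name, and sizes are hypothetical, and partition names vary by site:

```shell
#!/bin/bash
#SBATCH --job-name=hello_mpi      # name shown in the queue
#SBATCH --nodes=4                 # allocate 4 compute nodes
#SBATCH --ntasks-per-node=16      # 16 MPI tasks per node
#SBATCH --time=00:30:00           # wall-clock limit for the allocation
#SBATCH --partition=batch         # hypothetical partition (queue) name

# srun starts and monitors the tasks on the allocated nodes
srun ./hello_mpi
```

Submitted with `sbatch`, such a job waits in the pending queue (the third function, contention arbitration) until the requested resources become available.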

Slurm is the workload manager on about 60% of the TOP500 supercomputers, including Tianhe-2, which (as of June 2014) was the world's fastest computer.

Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[1]
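The Hilbert-curve idea can be sketched in a few lines: map each node's grid coordinates to its position along a space-filling Hilbert curve, so that nodes close on the curve are also close in the machine, then best-fit a job into the smallest contiguous run of idle nodes. The sketch below is a simplified illustration of the technique, not Slurm's actual implementation; the function names and the 4x4 grid are assumptions:

```python
def hilbert_index(n, x, y):
    """Map grid coordinates (x, y) to a distance along the Hilbert
    curve filling an n x n grid (n must be a power of two)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def best_fit(idle, k):
    """Pick k nodes from the smallest contiguous run (in Hilbert
    order) of idle nodes that can hold the job, or None if none fits."""
    idle = sorted(idle)
    runs, run = [], [idle[0]]
    for i in idle[1:]:
        if i == run[-1] + 1:
            run.append(i)
        else:
            runs.append(run)
            run = [i]
    runs.append(run)
    fits = [r for r in runs if len(r) >= k]
    return min(fits, key=len)[:k] if fits else None

# Order a 4x4 machine's nodes along the Hilbert curve, then place a
# 2-node job into the tightest gap among the idle nodes.
order = {(x, y): hilbert_index(4, x, y) for x in range(4) for y in range(4)}
idle_nodes = [order[(0, 0)], order[(0, 1)], order[(2, 2)],
              order[(2, 3)], order[(3, 3)]]
print(best_fit(idle_nodes, 2))  # -> [8, 9], the tight run near (2, 2)
```

For fat-tree networks the curve is replaced by topology-aware placement, but the goal is the same: keep a job's tasks on nodes that are close together in the network.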

History

Slurm began development as a collaborative effort primarily by Lawrence Livermore National Laboratory, SchedMD,[2] Linux NetworX, Hewlett-Packard, and Groupe Bull as a Free Software resource manager. It was inspired by the closed source Quadrics RMS and shares a similar syntax. The name is a reference to the soda in Futurama.[3] Over 100 people around the world have contributed to the project. It has since evolved into a sophisticated batch scheduler capable of satisfying the requirements of many large computer centers.

As of November 2015, the TOP500 list of the most powerful computers in the world indicates that Slurm is the workload manager on six of the top ten systems. Systems in the top ten running Slurm include Tianhe-2, a 33.86-petaflop system at NUDT; IBM Sequoia, an IBM Blue Gene/Q with 1.57 million cores and 17.2 petaflops at Lawrence Livermore National Laboratory; Piz Daint, a 7.78-petaflop Cray computer at the Swiss National Supercomputing Centre; Stampede, a 5.17-petaflop Dell computer at the Texas Advanced Computing Center;[4] and Vulcan, a 4.29-petaflop IBM Blue Gene/Q at Lawrence Livermore National Laboratory.[5]

Structure

Slurm's design is very modular with about 100 optional plugins. In its simplest configuration, it can be installed and configured in a couple of minutes. More sophisticated configurations provide database integration for accounting, management of resource limits and workload prioritization. Slurm also works with several meta-schedulers such as Moab Cluster Suite, Maui Cluster Scheduler, and Platform LSF.
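As an illustration of that modularity, plugins are selected by name in the slurm.conf configuration file. The fragment below is a hypothetical but syntactically typical configuration choosing a scheduler, a node-selection policy, prioritization, and database-backed accounting; the appropriate values depend on the site and the Slurm version:

```
# slurm.conf fragment (illustrative)
SchedulerType=sched/backfill                       # backfill scheduling plugin
SelectType=select/cons_res                         # allocate individual cores, not whole nodes
PriorityType=priority/multifactor                  # workload prioritization
AccountingStorageType=accounting_storage/slurmdbd  # accounting via the Slurm database daemon
TopologyPlugin=topology/tree                       # topology-aware task placement
```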

Notable features

Notable Slurm features include the following:

The following features were announced for version 14.11 of Slurm, which was released in November 2014:[6]

Supported platforms

While Slurm was originally written for the Linux kernel, the latest version supports many other operating systems, including AIX, BSDs (FreeBSD, NetBSD and OpenBSD), Linux, Mac OS X, and Solaris.[7] Slurm also supports several unique computer architectures, including:

License

Slurm is available under the GNU General Public License V2.

Commercial support

In 2010, the developers of Slurm founded SchedMD, which maintains the canonical source and provides development, level-3 commercial support, and training services. Commercial support is also available from Bright Computing, Bull, Cray, and Science + Computing.

References

  1. Pascual, Jose Antonio; Navaridas, Javier; Miguel-Alonso, Jose (2009). "Job Scheduling Strategies for Parallel Processing". Lecture Notes in Computer Science 5798: 138–144. doi:10.1007/978-3-642-04633-9_8. ISBN 978-3-642-04632-2.
  2. "Slurm Commercial Support, Development, and Installation". SchedMD. Retrieved 2014-02-23.
  3. "SLURM: Simple Linux Utility for Resource Management" (PDF). 23 June 2003. Retrieved 11 January 2016.
  4. "Texas Advanced Computing Center - Home". Tacc.utexas.edu. Retrieved 2014-02-23.
  5. Donald B Johnston (2010-10-01). "Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology". Llnl.gov. Retrieved 2014-02-23.
  6. "Slurm - What's New". SchedMD. Retrieved 2014-08-29.
  7. SLURM Platforms

This article is issued from Wikipedia, version of Saturday, May 07, 2016. The text is available under the Creative Commons Attribution-ShareAlike license, but additional terms may apply for the media files.