HTCondor
| Developer(s) | University of Wisconsin–Madison |
| --- | --- |
| Stable release | 8.4.6 / April 21, 2016 |
| Preview release | 8.5.3 / March 24, 2016 |
| Operating system | Microsoft Windows, Mac OS X, Linux, FreeBSD |
| Type | High-throughput computing |
| License | Apache License 2.0 |
| Website | research.cs.wisc.edu/htcondor |
HTCondor is an open-source high-throughput computing software framework for coarse-grained distributed parallelization of computationally intensive tasks.[1] It can be used to manage workload on a dedicated cluster of computers, to farm out work to idle desktop computers (so-called cycle scavenging), or both. HTCondor runs on Linux, Unix, Mac OS X, FreeBSD, and contemporary Windows operating systems, and can seamlessly integrate dedicated resources (such as rack-mounted clusters) and non-dedicated desktop machines into a single computing environment.
HTCondor was formerly known as Condor; the name was changed in October 2012 to resolve a trademark lawsuit.[2]
HTCondor is developed by the HTCondor team at the University of Wisconsin–Madison and is freely available for use. It is open-source software, licensed under the Apache License 2.0.[3] It can be downloaded from the HTCondor web site and is packaged in the Fedora Linux distribution; it is also available in the repositories of other distributions, such as Ubuntu.
By way of example, the HTCondor pool at the NASA Advanced Supercomputing facility (NAS) consists of approximately 350 SGI and Sun workstations purchased and used for software development, visualization, email, document preparation, and similar work. Each workstation runs a daemon that watches user I/O and CPU load. When a workstation has been idle for two hours, a job from the batch queue is assigned to it and runs until the daemon detects a keystroke, mouse motion, or high non-HTCondor CPU usage, at which point the job is removed from the workstation and returned to the batch queue.
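The eviction policy described above amounts to a simple predicate over machine state. The following toy Python sketch captures that logic; it is not HTCondor's actual configuration language, and the names and thresholds are purely illustrative:

```python
from dataclasses import dataclass

IDLE_BEFORE_START = 2 * 60 * 60  # seconds of idleness required before starting a job (illustrative)
LOAD_THRESHOLD = 0.3             # non-HTCondor CPU load considered "owner activity" (illustrative)

@dataclass
class MachineState:
    seconds_since_input: float   # time since last keystroke or mouse motion
    non_condor_load: float       # CPU load from processes other than HTCondor

def may_start_job(m: MachineState) -> bool:
    """A workstation accepts a batch job only after a long idle period."""
    return (m.seconds_since_input >= IDLE_BEFORE_START
            and m.non_condor_load < LOAD_THRESHOLD)

def must_evict_job(m: MachineState) -> bool:
    """A running job is evicted (and re-queued) as soon as the owner returns."""
    return (m.seconds_since_input == 0
            or m.non_condor_load >= LOAD_THRESHOLD)
```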
HTCondor can run both sequential and parallel jobs. Sequential jobs can be run in several different "universes", including the "vanilla" universe, which can run most "batch ready" programs, and the "standard universe", in which the target application is re-linked against the HTCondor I/O library, providing remote job I/O and job checkpointing. HTCondor also provides a "local universe", which allows a job to run on the submit host itself.
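As a sketch of how a vanilla-universe job might be queued, the following uses HTCondor's Python bindings (the `htcondor` module shipped with recent releases); the executable and file names are placeholders:

```python
import htcondor

# Submit-description keywords, as they would appear in a submit file.
job = htcondor.Submit({
    "universe":   "vanilla",      # run an ordinary "batch ready" program
    "executable": "/bin/sleep",   # placeholder executable
    "arguments":  "60",
    "output":     "job.out",
    "error":      "job.err",
    "log":        "job.log",
})

schedd = htcondor.Schedd()        # the submit host's scheduler daemon
result = schedd.submit(job)       # queue one instance of the job
print("Submitted as cluster", result.cluster())
```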
For parallel jobs, HTCondor supports standard MPI and PVM (Goux et al., 2000), in addition to its own master–worker library, "MW", for embarrassingly parallel tasks.
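The master–worker pattern that MW implements is illustrated by the following minimal Python sketch, which uses the standard library's process pool rather than MW's actual C++ API:

```python
from multiprocessing import Pool

def work(task):
    # Each worker independently processes one task; tasks never
    # communicate, which is what makes the workload embarrassingly parallel.
    return task * task

if __name__ == "__main__":
    tasks = range(100)               # the master's task list
    with Pool(processes=4) as pool:  # workers pull tasks as they become free
        results = pool.map(work, tasks)
    print(sum(results))
```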
HTCondor-G allows HTCondor jobs to use resources not under its direct control. It is most often used to interface with grid and cloud resources, such as pre-WS and WS Globus, NorduGrid ARC, UNICORE, and Amazon EC2, but it can also submit to other batch systems, such as Torque/PBS and LSF. Support for Sun Grid Engine is under development as part of the EGEE project.
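A grid-universe submission looks much like a vanilla one, with a `grid_resource` entry naming the remote service. A hedged sketch using the Python bindings, with a placeholder gatekeeper address:

```python
import htcondor

# HTCondor-G job: HTCondor tracks the job locally, but a remote Globus
# GRAM gatekeeper (placeholder address below) actually executes it.
job = htcondor.Submit({
    "universe":      "grid",
    "grid_resource": "gt2 gatekeeper.example.org/jobmanager-pbs",
    "executable":    "/bin/hostname",
    "output":        "remote.out",
    "log":           "remote.log",
})

htcondor.Schedd().submit(job)
```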
HTCondor supports the DRMAA job API, which allows DRMAA-compliant clients to submit and monitor HTCondor jobs. The SAGA C++ reference implementation provides an HTCondor plug-in (adaptor) that makes HTCondor job submission and monitoring available through SAGA's Python and C++ APIs.
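For illustration, job submission through a SAGA-style Python API might look as follows; this sketch assumes the radical.saga package (a descendant of the SAGA reference implementation), and the adaptor URL is a placeholder:

```python
import radical.saga as rs

# The "condor://" adaptor routes SAGA job operations to HTCondor.
service = rs.job.Service("condor://localhost")

desc = rs.job.Description()
desc.executable = "/bin/date"
desc.output = "date.out"

job = service.create_job(desc)
job.run()                        # submit to HTCondor
job.wait()                       # block until the job finishes
print("final state:", job.state)
```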
Other HTCondor features include "DAGMan" which provides a mechanism to describe job dependencies.
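Recent releases of the Python bindings also ship an `htcondor.dags` module for building DAGMan input files programmatically. A minimal sketch, assuming that API, with placeholder executables:

```python
import htcondor
from htcondor import dags

# A two-step pipeline: "prepare" must finish before "analyze" starts.
dag = dags.DAG()
prepare = dag.layer(
    name="prepare",
    submit_description=htcondor.Submit({"executable": "/bin/true"}),
)
prepare.child_layer(
    name="analyze",
    submit_description=htcondor.Submit({"executable": "/bin/true"}),
)

dags.write_dag(dag, ".")  # writes a .dag file for condor_submit_dag
```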
HTCondor is one of the job scheduler mechanisms supported by GRAM (Grid Resource Allocation Manager), a component of the Globus Toolkit.
HTCondor was the scheduler software used to distribute jobs for the first draft assembly of the Human Genome.
While HTCondor makes good use of unused computing time, leaving computers turned on for use with HTCondor increases energy consumption and associated costs. The University of Liverpool[4] has demonstrated an effective solution to this problem using a mixture of Wake-on-LAN and the commercial power-management software PowerMAN.[5] Starting with version 7.1.1, HTCondor can hibernate and wake machines according to user-specified policies without the need for third-party software.
References
1. Thain, Douglas; Tannenbaum, Todd; Livny, Miron (2005). "Distributed Computing in Practice: The Condor Experience" (PDF). Concurrency and Computation: Practice and Experience. 17 (2–4): 323–356. doi:10.1002/cpe.938.
2. Tannenbaum, Todd. ""Condor" name changing to "HTCondor"". Retrieved 11 March 2013.
3. HTCondor License Agreement.
4. University of Liverpool Condor Project.
5. University of Liverpool case study with Data Synergy PowerMAN software.