MADNESS

Original author(s) George Fann, Robert J. Harrison
Developer(s) Oak Ridge National Laboratory, Stony Brook University, Virginia Tech, Argonne National Laboratory
Initial release Forthcoming
Operating system Cross-platform
Available in C++
Type Scientific simulation software
License GNU GPL v2
Website github.com/m-a-d-n-e-s-s/madness

MADNESS (Multiresolution Adaptive Numerical Environment for Scientific Simulation) is a high-level software environment for the solution of integral and differential equations in many dimensions using adaptive and fast harmonic analysis methods with guaranteed precision, based on multiresolution analysis[1][2] and separated representations.[3]
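
Illustratively, the separated representations of the cited work[3] approximate an integral-operator kernel such as the Poisson kernel 1/r by a short sum of Gaussians, which factorizes across the coordinate directions (a sketch of the idea, not the full construction):

    \frac{1}{r} \;\approx\; \sum_{\mu=1}^{M} c_\mu\, e^{-t_\mu r^2} \;=\; \sum_{\mu=1}^{M} c_\mu \prod_{d=1}^{3} e^{-t_\mu x_d^2}

Applying the operator in many dimensions then reduces to sequences of one-dimensional convolutions against the adaptively refined multiwavelet representation, with the number of terms M chosen to meet the requested precision.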

There are three main components to MADNESS. At the lowest level is a petascale parallel programming environment[4] that aims to increase programmer productivity and code performance/scalability while maintaining backward compatibility with current programming tools such as the Message Passing Interface and Global Arrays. The numerical capabilities built upon the parallel tools provide a high-level environment for composing and solving numerical problems in many (1–6+) dimensions. Finally, built upon the numerical tools are new applications with an initial focus on chemistry,[5][6] atomic and molecular physics,[7] materials science, and nuclear structure. MADNESS is open source, has an object-oriented design, and is designed for parallel execution on computers with up to millions of cores; it already runs on the Cray XT5 at Oak Ridge National Laboratory and the IBM Blue Gene at Argonne National Laboratory. Multiplication of small matrices (small relative to the large matrices for which BLAS libraries are optimized) is the primary computational kernel in MADNESS, so an efficient implementation on modern CPUs is an ongoing research effort.[8][9] Adapting the irregular computation in MADNESS to heterogeneous platforms is nontrivial because the kernel is too small to be offloaded effectively via compiler directives (e.g. OpenACC), but it has been demonstrated for CPU–GPU systems.[10] Intel has publicly stated that MADNESS is one of the codes running on the Intel MIC architecture,[11][12] but no performance data has been published yet.
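
As a hedged illustration of composing a problem in the high-level numerical environment, the sketch below projects an analytic 3-d Gaussian into the adaptive multiwavelet basis and computes its norm using the public C++ interface (World, FunctionDefaults, real_factory_3d); exact header paths, type aliases, and defaults are assumptions that may differ between MADNESS releases.

    // Minimal sketch: represent exp(-r^2) adaptively to a requested precision
    // and compute its L2 norm in parallel. Names follow the public MADNESS API,
    // but details may vary across versions.
    #include <madness/mra/mra.h>
    #include <cmath>
    using namespace madness;

    static double gaussian(const coord_3d& r) {
        return std::exp(-(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]));
    }

    int main(int argc, char** argv) {
        World& world = initialize(argc, argv);             // start the parallel runtime
        startup(world, argc, argv);                        // load multiwavelet data

        FunctionDefaults<3>::set_cubic_cell(-20.0, 20.0);  // simulation cell
        FunctionDefaults<3>::set_thresh(1e-6);             // requested precision

        real_function_3d f = real_factory_3d(world).f(gaussian);  // adaptive projection
        double norm = f.norm2();                                   // global reduction
        if (world.rank() == 0) print("||f|| =", norm);

        finalize();
        return 0;
    }

The same Function objects support arithmetic, differentiation, and the application of integral operators, which is how the chemistry and physics applications described below are composed.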

MADNESS' chemistry capability includes Hartree–Fock and density functional theory[13][14] (including analytic derivatives,[15] response properties,[16] and time-dependent density functional theory with asymptotically corrected potentials[17]) as well as nuclear density functional theory[18] and Hartree–Fock–Bogoliubov theory.[19][20] MADNESS and BigDFT are the two most widely known codes that perform DFT and TDDFT using wavelets.[21] Many-body wavefunctions requiring six-dimensional spatial representations are also implemented (e.g. MP2[22]). The parallel runtime inside MADNESS has been used to implement a wide variety of features, including graph optimization.[23] From a mathematical perspective, MADNESS emphasizes rigorous numerical precision without loss of computational performance.[24] This is useful not only in quantum chemistry and nuclear physics, but also in the modeling of partial differential equations.[25]
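
In outline, the multiresolution quantum chemistry cited above[13] works with the bound-state eigenproblem in integral rather than differential form, so that each self-consistent iteration applies a convolution operator to the orbitals (a sketch of the working equation for a one-electron orbital with energy E < 0):

    \left(-\tfrac{1}{2}\nabla^2 + V\right)\psi = E\,\psi
    \quad\Longrightarrow\quad
    \psi = -2\,\hat{G}_\mu (V\psi), \qquad G_\mu(r) = \frac{e^{-\mu r}}{4\pi r}, \quad \mu = \sqrt{-2E}

Here \hat{G}_\mu denotes convolution with the bound-state Helmholtz Green's function, which is applied efficiently using the separated operator representations described earlier.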

MADNESS was recognized by the R&D 100 Awards in 2011.[26][27] It is an important code for Department of Energy supercomputing sites and is used by the leadership computing facilities at both Argonne National Laboratory[28] and Oak Ridge National Laboratory[29] to evaluate the stability and performance of their latest supercomputers. It has users around the world, including in the United States and Japan.[30] MADNESS has been a workhorse code for computational chemistry in the DOE INCITE program[31] at the Oak Ridge Leadership Computing Facility[32] and is noted as one of the important codes to run on the Cray Cascade architecture.[33]

References

  1. Beylkin, Gregory; Fann, George; Harrison, Robert J.; Kurcz, Christopher; Monzón, Lucas (2012). "Multiresolution representation of operators with boundary conditions on simple domains". Applied and Computational Harmonic Analysis 33 (1): 109–139. doi:10.1016/j.acha.2011.10.001.
  2. Fann, George; Beylkin, Gregory; Harrison, Robert J.; Jordan, Kirk E. (2004). "Singular operators in multiwavelet bases". IBM Journal of Research and Development 48 (2): 161–171. doi:10.1147/rd.482.0161.
  3. Beylkin, Gregory; Cramer, Robert; Fann, George; Harrison, Robert J. (2007). "Multiresolution separated representations of singular and weakly singular operators". Applied and Computational Harmonic Analysis 23 (2): 235–253. doi:10.1016/j.acha.2007.01.001.
  4. Thornton, W. Scott; Vence, Nicholas; Harrison, Robert J. (2009). "Introducing the MADNESS numerical framework for petascale computing" (PDF). Proceedings of the Cray User Group Conference.
  5. Fosso-Tande, Jacob; Harrison, Robert (2013). "Implicit solvation models in a multiresolution multiwavelet basis". Chemical Physics Letters 561–562: 179–184. doi:10.1016/j.cplett.2013.01.065.
  6. Fosso-Tande, Jacob; Harrison, Robert (2013). "Confinement effects of solvation on a molecule physisorbed on a polarizable continuum particle". Computational and Theoretical Chemistry 1017: 22–30. doi:10.1016/j.comptc.2013.05.006.
  7. Vence, Nicholas; Harrison, Robert; Krstic, Predrag (2012). "Attosecond electron dynamics: A multiresolution approach". Physical Review A 85 (3): 033403. doi:10.1103/PhysRevA.85.033403.
  8. Stock, Kevin; Henretty, Thomas; Murugandi, I.; Sadayappan, P.; Harrison, Robert J. (2011). "Model-Driven SIMD Code Generation for a Multi-resolution Tensor Kernel". Proceedings of the IEEE International Parallel Distributed Processing Symposium (IPDPS): 1058–1067. doi:10.1109/IPDPS.2011.101.
  9. Shin, Jaewook; Hall, Mary W.; Chame, Jacqueline; Chen, Chun; Hovland, Paul D. (2009). "Autotuning and specialization: Speeding up matrix multiply for small matrices with compiler technology" (PDF). Proceedings of the Fourth International Workshop on Automatic Performance Tuning.
  10. Slavici, Vlad; Varier, Raghu; Cooperman, Gene; Harrison, Robert J. (September 2012). "Adapting Irregular Computations to Large CPU-GPU Clusters in the MADNESS Framework" (PDF). Proceedings of the IEEE International Conference on Cluster Computing (CLUSTER): 1–9. doi:10.1109/CLUSTER.2012.42.
  11. James Reinders (20 September 2012). "Intel Xeon Phi coprocessor support by software tools".
  12. Timothy Prickett Morgan (16 November 2011). "Hot Intel teraflops MIC coprocessor action in a hotel".
  13. Harrison, Robert J.; Fann, George I.; Yanai, Takeshi; Gan, Zhengting; Beylkin, Gregory (2004). "Multiresolution quantum chemistry: Basic theory and initial applications". The Journal of Chemical Physics 121 (23): 11587–11598. doi:10.1063/1.1791051. PMID 15634124.
  14. Yanai, Takeshi; Fann, George I.; Gan, Zhengting; Harrison, Robert J.; Beylkin, Gregory (2004). "Multiresolution quantum chemistry: Hartree–Fock exchange". The Journal of Chemical Physics 121 (14): 6680–6688. doi:10.1063/1.1790931.
  15. Yanai, Takeshi; Fann, George I.; Gan, Zhengting; Harrison, Robert J.; Beylkin, Gregory (2004). "Multiresolution quantum chemistry: Analytic derivatives for Hartree–Fock and density functional theory". The Journal of Chemical Physics 121 (7): 2866–2876. doi:10.1063/1.1768161.
  16. Sekino, Hideo; Maeda, Yasuyuki; Yanai, Takeshi; Harrison, Robert J. (2008). "Basis set limit Hartree–Fock and density functional theory response property evaluation by multiresolution multiwavelet basis". The Journal of Chemical Physics 129 (3): 034111–034117. doi:10.1063/1.2955730. PMID 18647020.
  17. Yanai, Takeshi; Harrison, Robert J.; Handy, Nicholas C. (2005). "Multiresolution quantum chemistry in multiwavelet bases: time-dependent density functional theory with asymptotically corrected potentials in local density and generalized gradient approximations". Molecular Physics 103 (2-3): 413–424. doi:10.1080/00268970412331319236.
  18. "UNEDF SciDAC Collaboration Universal Nuclear Energy Density Functional".
  19. Pei, J.C.; Fann, G.I.; Harrison, R.J.; Nazarewicz, W.; Hill, J.; Galindo, D.; Jia, J. (2012). "Coordinate-Space Hartree-Fock-Bogoliubov Solvers for Superfluid Fermi Systems in Large Boxes". arXiv:1204.5254 [nucl-th].
  20. Pei, J. C.; Stoitsov, M. V.; Fann, G. I.; Nazarewicz, W.; Schunck, N.; Xu, F. R. (December 2008). "Deformed coordinate-space Hartree-Fock-Bogoliubov approach to weakly bound nuclei and large deformations". Physical Review C 78 (6): 064306–064317. doi:10.1103/PhysRevC.78.064306.
  21. Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y. (2012). "Wavelet-based linear-response time-dependent density-functional theory". Chemical Physics 402 (0): 29–40. doi:10.1016/j.chemphys.2012.03.024.
  22. Bischoff, Florian A.; Harrison, Robert J.; Valeev, Edward F. (2012). "Computing many-body wave functions with guaranteed precision: The first-order Møller–Plesset wave function for the ground state of helium atom". The Journal of Chemical Physics 137 (10): 104103–104112. doi:10.1063/1.4747538.
  23. Sullivan, Blair D.; Weerapurage, Dinesh P.; Groer, Christopher S. (2012). Parallel Algorithms for Graph Optimization using Tree Decompositions (Technical report). doi:10.2172/1042920.
  24. Harrison, Robert J.; Fann, George I. (2007). "SPEED and PRECISION in QUANTUM CHEMISTRY". SciDAC Review 1 (3): 54–65.
  25. Reuter, Matthew G.; Hill, Judith C.; Harrison, Robert J. (2012). "Solving PDEs in irregular geometries with multiresolution methods I: Embedded Dirichlet boundary conditions". Computer Physics Communications 183 (1): 1–7. doi:10.1016/j.cpc.2011.07.001.
  26. "Free framework for scientific simulation". R&D Magazine. 14 August 2011. Retrieved November 26, 2012.
  27. "MADNESS Named R&D 100 Winner".
  28. "Accurate Numerical Simulations Of Chemical Phenomena Involved in Energy Production and Storage with MADNESS and MPQC".
  29. "Application Readiness at ORNL" (PDF).
  30. "Far from home - Japanese graduate student journeys to UT to study computational chemistry".
  31. "Chemistry and Materials Simulations Speed Clean Energy Production and Storage". 1 June 2011.
  32. Bland, A.; Kendall, R.; Kothe, D.; Rogers, J.; Shipman, G. (2010). "Jaguar: The world’s most powerful computer" (PDF). Proceedings of the Cray User Group Conference.
  33. "Cray unveils 100 petaflop XC30 supercomputer". 8 November 2012.
