GROMACS

From HP-SEE Wiki

Revision as of 13:02, 29 July 2011 by Roczei (Talk | contribs)

Information

Authors/Maintainers

Summary

GROMACS is a versatile package for molecular dynamics, i.e. simulating the Newtonian equations of motion for systems of hundreds to millions of particles. It is primarily designed for biochemical molecules such as proteins, lipids and nucleic acids, which have many complicated bonded interactions; but because GROMACS is extremely fast at calculating the non-bonded interactions that usually dominate simulations, many groups also use it for research on non-biological systems such as polymers. It supports all the usual algorithms expected of a modern molecular dynamics implementation, and a large number of algorithmic optimizations give it very high performance compared with other packages. GROMACS also contains several state-of-the-art algorithms that allow the simulation time step to be extended significantly, further enhancing performance without sacrificing accuracy or detail. GROMACS can be run in parallel using standard MPI communication.
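As a sketch of such a parallel run, a typical GROMACS 4.x workflow prepares a run input file with grompp and then launches the MPI-enabled mdrun. The input file names and the mdrun_mpi binary name are assumptions; the MPI binary name in particular varies between sites:

```shell
# Prepare a binary run input file (.tpr) from the MD parameters,
# starting structure and topology (file names are placeholders)
grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

# Run the simulation on 32 MPI processes; "mdrun_mpi" is a common
# but site-specific name for the MPI-enabled mdrun binary
mpirun -np 32 mdrun_mpi -s topol.tpr -deffnm md
```

With GROMACS 4.5 and later, a single multi-core node can instead be used without MPI by running the threaded binary, e.g. `mdrun -nt 8 -s topol.tpr -deffnm md`.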

Features

Architectural/Functional Overview

  • high level design info, how it works, performance - may be a link, or several links

Usage Overview

Dependencies

  • An ANSI C compiler, and possibly Fortran. GROMACS can be compiled entirely in C, which means it should run on essentially any UNIX-style computer. However, prior to version 4.5 the innermost loops for some platforms are also provided in Fortran to improve performance, so a Fortran compiler is strongly recommended where available - it makes a huge difference. For modern Intel and AMD processors even faster assembly loops are provided, so for those platforms Fortran can be skipped.
  • Where assembly loops are in use, GROMACS performance is largely independent of the compiler used. However the GCC 4.1.x series of compilers are broken for GROMACS, and these are provided with some commodity Linux clusters. Do not use these compilers!
  • If you want to run in parallel across a network, you need MPI. If you are running on a supercomputer, an optimized MPI version is probably already installed - consult your documentation or ask your system administrator. See below for information about how to make use of MPI. As of GROMACS 4.5, threading is supported, so MPI is no longer required for e.g. multi-core workstations.
  • You need an FFT library to perform Fourier transforms. Its precision (single vs. double) must match the precision with which you intend to compile GROMACS. Recent versions support FFTW-2.1.x, FFTW-3.x (which has a different interface from FFTW version 2), the Intel Math Kernel Library (MKL) version 6.0 and later, and there is also a slower built-in version of FFTPACK in case you really don't want to install a good (free) library. We currently recommend FFTW (see http://www.fftw.org), since it is the most tested option and is also free and faster than the alternatives. The parallel transforms in FFTW have not been used since GROMACS 3.3, so MPI versions of the FFTW libraries are not needed.
  • CMake (the build system used by GROMACS 4.5 and later)
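Putting the dependencies above together, a CMake-based build might look as follows. This is a sketch only: the version number, install prefix and FFTW location are placeholders, and the exact CMake variable names can differ between GROMACS versions:

```shell
# Unpack the sources (version number is a placeholder)
tar xzf gromacs-4.5.x.tar.gz
cd gromacs-4.5.x

# Out-of-source build, as is conventional with CMake
mkdir build && cd build

# Point CMake at an FFTW installation and request the MPI-enabled build;
# CMAKE_PREFIX_PATH and the install prefix are site-specific assumptions
cmake .. \
  -DCMAKE_INSTALL_PREFIX=$HOME/gromacs \
  -DCMAKE_PREFIX_PATH=/opt/fftw3 \
  -DGMX_MPI=ON

make -j 4
make install
```

On systems with a working Fortran or assembly-enabled toolchain, the default compiler flags chosen by CMake are usually adequate; consult the site administrators before overriding them.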

HP-SEE Applications

  • FMD-PA

Resource Centers

  • BG, BG
  • HPCG, BG
  • NCIT-Cluster, RO
  • NIIFI SC, HU

Usage by other projects and communities

  • If any

Recommendations for Configuration and Usage

Please describe here any common settings, configurations or conventions that would make the usage of this resource (library or tool) more interoperable or scalable across the HP-SEE resources. These recommendations should include anything related to the resource that is agreed upon by administrators and users, or across sites and applications. They should emerge from questions or discussions opened by site administrators or application developers at any stage, including installation, development, usage, or adaptation for another HPC centre.

Provided descriptions should describe general or site specific aspects of resource installation, configuration and usage, or describe the guidelines or convention for deploying or using the resource within the local (user/site) or temporary environment (job). Examples are:

  • Common configuration settings of execution environment
  • Filesystem path or local access string
  • Environment variables to be set or used by applications
  • Options (e.g. additional modules) that are needed or required by applications and should be present
  • Minimum quantitative values (e.g. quotas) offered by the site
  • Location and format of some configuration or usage hint instructing applications on proper use of the resource or site specific policy
  • Key installation or configuration settings that should be set to a common value, or locally tweaked by local site admins
  • Conventions for application or job bound installation and usage of the resource
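As an illustration of such conventions, a site might expose GROMACS through environment modules and a batch job fragment along these lines. The module name, the GMXLIB path and the mdrun_mpi binary name are hypothetical examples, not HP-SEE-wide agreements:

```shell
#!/bin/sh
# Hypothetical batch job fragment; adapt module names and paths to the site

# Load the site-provided GROMACS build (module name is an assumption)
module load gromacs/4.5.4

# GMXLIB tells GROMACS where to find its topology/force-field files,
# useful when the installation lives in a non-default location
export GMXLIB=$HOME/gromacs/share/gromacs/top

# Launch on the processor count granted by the scheduler
# ($NSLOTS is scheduler-specific; other batch systems use other variables)
mpirun -np $NSLOTS mdrun_mpi -s topol.tpr -deffnm md
```

Agreeing on a common module naming scheme and on which environment variables (such as GMXLIB) sites guarantee to set would make job scripts portable across the HP-SEE centres listed above.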