NAMD

From HP-SEE Wiki



Authors/Maintainers

  • Also origin, if the software comes from a specific project.

Summary

NAMD is a parallel classical molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms and to tens of processors on commodity clusters using gigabit Ethernet. NAMD uses the companion molecular graphics program VMD for simulation setup and for analysis of the MD trajectories it generates, but it is also file-compatible with AMBER, CHARMM, and X-PLOR. It is available as precompiled binaries for many platforms, including BlueGene/P and Origin2000, and from the source code it can be built on any platform supporting MPI or Ethernet.

NAMD supports simulations ranging from basic protocols, such as constant temperature via velocity rescaling, temperature coupling, or Langevin dynamics, and constant pressure via the Berendsen or Nose-Hoover Langevin piston methods, to particle mesh Ewald full electrostatics for periodic systems and symplectic multiple time step integration. It also enables alchemical free energy calculations, in which a subset of atoms of the studied system is gradually mutated from one state to another using either the free energy perturbation or the thermodynamic integration approach. Conformational free energy calculations are also possible within the collective variables module of the code.

NAMD is implemented using the Converse runtime system, and its major components are written in Charm++. Converse provides a machine-independent interface to all popular parallel computers as well as workstation clusters. Converse also implements a data-driven execution model, allowing parallel languages such as Charm++ to support the dynamic behaviour of NAMD's chunk-based decomposition scheme. The dynamic components of NAMD are implemented in the Charm++ parallel language as collections of C++ objects that communicate by remotely invoking methods on other objects; this supports NAMD's multi-partition decompositions, and the data-driven execution adaptively overlaps communication and computation. Finally, NAMD benefits from Charm++'s load balancing framework to achieve high parallel performance. The largest simulation performed with NAMD to date is over 300,000 atoms on 1000 processors.
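Most of the features listed above are controlled through keywords in the NAMD configuration file. As a rough illustration only (the values below are typical examples, not recommendations), a configuration fragment enabling Langevin temperature control, Langevin piston pressure control, particle mesh Ewald electrostatics, and multiple time stepping could look like this:

# temperature control via Langevin dynamics
langevin              on
langevinTemp          300
langevinDamping       1
# pressure control via the Nose-Hoover Langevin piston
langevinPiston        on
langevinPistonTarget  1.01325
langevinPistonPeriod  100.
langevinPistonDecay   50.
langevinPistonTemp    300
# full electrostatics via particle mesh Ewald
PME                   yes
PMEGridSpacing        1.0
# multiple time step integration
timestep              2.0
nonbondedFreq         1
fullElectFrequency    2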

Features

Architectural/Functional Overview

Usage Overview

In the context of sequential statistical physics and quantum mechanical studies of complex condensed matter systems, NAMD is used to generate a classical molecular dynamics trajectory for the system of interest. This trajectory is later analyzed with time-series analysis tools, and appropriately chosen snapshots are subsequently subjected to more in-depth analysis by more advanced and rigorous quantum mechanical methodologies.

On a multicore Linux machine:

namd2 +p<procs> <configfile>

On a multiprocessor workstation, with the aid of the charmrun program (the ++local option is used to specify that only the local machine should be used):

charmrun namd2 ++local +p<procs> <configfile>
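As a concrete illustration, running a simulation described in a (hypothetical) configuration file ubq_eq.conf on 8 cores of a multicore machine, with the log redirected to a file:

namd2 +p8 ubq_eq.conf > ubq_eq.log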

Dependencies

The following files are needed to run an NAMD simulation:

  • Force field parameters (in either CHARMM or X-PLOR format), e.g. the CHARMM parameter sets available at http://mackerell.umaryland.edu/CHARMM_ff_params.html
  • A PSF file (in X-PLOR format) describing the molecular structure
  • A PDB file with the initial coordinates of the studied molecular system
  • A configuration file specifying the simulation options (a minimal sketch is given below)
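A minimal configuration file tying these inputs together could look roughly as follows; all file names and numerical values are placeholders for illustration only.

# input files
structure          mysystem.psf
coordinates        mysystem.pdb
paraTypeCharmm     on
parameters         par_mysystem.prm
# output and initial temperature
outputName         mysystem_run
outputEnergies     100
dcdfreq            1000
temperature        300
# basic non-bonded options
exclude            scaled1-4
1-4scaling         1.0
cutoff             12.
switching          on
switchdist         10.
pairlistdist       13.5
# integration and run length
timestep           2.0
rigidBonds         all
run                10000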

HP-SEE Applications

  • CompChem
  • ISyMAB
  • MDSCS
  • HC-HC-MD-QM-CS

Resource Centers

  • BG/P, BG
  • HPCG, BG
  • IFIN_Bio, RO
  • NCIT-Cluster, RO
  • IPB, RS

Usage by Other Projects and Communities

  • If any

Recommendations for Configuration and Usage

Compilation of NAMD from scratch, i.e. building the complete binary from source, requires, besides the C and C++ compilers, several additional libraries: Charm++/Converse, TCL, and FFTW. Compilation without TCL or FFTW is possible, but some program features will then be disabled. The Charm++/Converse library is included in the NAMD source distribution, while precompiled TCL and FFTW libraries are available from http://www.ks.uiuc.edu/Research/namd/libraries/. A complete NAMD build consists of the following steps.

Unpacking the NAMD source code, e.g. for the 2.9b3 release:

tar xzf NAMD_2.9b3_Source.tar.gz

Unpacking the corresponding Charm++/Converse library:

tar xf charm-6.4.0.tar

Entering the charm directory:

cd charm-6.4.0

Building (and subsequent testing) of the Charm++/Converse library (e.g. the MPI version):

env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 --with-production

Changing to the test directory:

cd mpi-linux-x86_64/tests/charm++/megatest

Compilation of the pgm test program:

make pgm

Running the test:

mpirun -n X ./pgm

Downloading and installation of the TCL and FFTW libraries:

cd NAMD_2.9b3_Source
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
mv tcl8.5.9-linux-x86_64 tcl

If the directories charm-6.4.0, fftw and tcl are subdirectories of NAMD_2.9b3_Source, then essentially no editing of the build configuration files is needed. If this is not the case, the following files need to be edited:

vi Make.charm (CHARMBASE should be set to the full path of the charm directory)
vi arch/Linux-x86_64.fftw (the FFTW library name and the path to the FFTW files should be set properly)
vi arch/Linux-x86_64.tcl (the TCL library name/version and the path to the TCL files should be set properly)
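As an orientation, the relevant entries in these files typically look roughly as follows; the paths are placeholders and the exact variable and library names may differ between NAMD releases.

In Make.charm:

CHARMBASE = /full/path/to/charm-6.4.0

In arch/Linux-x86_64.fftw:

FFTDIR=/full/path/to/NAMD_2.9b3_Source/fftw
FFTINCL=-I$(FFTDIR)/include
FFTLIB=-L$(FFTDIR)/lib -lsrfftw -lsfftw

In arch/Linux-x86_64.tcl:

TCLDIR=/full/path/to/NAMD_2.9b3_Source/tcl
TCLINCL=-I$(TCLDIR)/include
TCLLIB=-L$(TCLDIR)/lib -ltcl8.5 -ldl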

Setting up the build directory and compiling (e.g. the MPI version):

./config Linux-x86_64-g++ --charm-arch mpi-linux-x86_64
cd Linux-x86_64-g++
make
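The resulting namd2 binary is placed in the Linux-x86_64-g++ directory. Since this is an MPI build, it is normally launched through the system's MPI runtime; the exact launcher and its options depend on the local environment, e.g.:

mpirun -n <procs> ./namd2 <configfile>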