NAMD

From HP-SEE Wiki

Revision as of 08:06, 5 September 2011

Authors/Maintainers

  • Developed and maintained by the Theoretical and Computational Biophysics Group at the Beckman Institute, University of Illinois at Urbana-Champaign.

Summary

NAMD is a parallel classical molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of processors on high-end parallel platforms and to tens of processors on commodity clusters using gigabit Ethernet. NAMD uses the molecular graphics program VMD, designed to work with it, for simulation setup and for analysis of the MD trajectories it generates, but it is also file-compatible with AMBER, CHARMM, and X-PLOR. It is available as precompiled binaries for many platforms, including BlueGene/P and Origin2000, and can be built from source on any platform supporting MPI or Ethernet.

NAMD supports a wide range of simulation protocols: constant temperature via rescaling, coupling, or Langevin dynamics; constant pressure via the Berendsen or Langevin Nosé-Hoover methods; particle-mesh Ewald full electrostatics for periodic systems; and symplectic multiple-time-step integration. It also enables alchemical free energy calculations, in which a subset of atoms of the studied system is gradually mutated from one state to another using either the free energy perturbation or the thermodynamic integration approach. Conformational free energy calculations are also possible within the collective variables module of the code.

NAMD is implemented using the Converse runtime system, and its major components are written in Charm++. Converse provides a machine-independent interface to all popular parallel computers as well as workstation clusters, and implements a data-driven execution model that allows parallel languages such as Charm++ to support the dynamic behaviour of NAMD's chunk-based decomposition scheme. The dynamic components of NAMD are implemented in the Charm++ parallel language as collections of C++ objects that communicate by remotely invoking methods on other objects; this supports NAMD's multi-partition decompositions, and data-driven execution adaptively overlaps communication and computation. Finally, NAMD benefits from Charm++'s load balancing framework to achieve unsurpassed parallel performance. The largest simulation performed with NAMD to date comprises over 300,000 atoms on 1000 processors.
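
To make the simulation options mentioned above concrete, the sketch below shows a minimal, hypothetical NAMD configuration file for a constant-temperature, constant-pressure run with particle-mesh Ewald electrostatics. The input file names, cell dimensions and all numeric values are illustrative placeholders, not recommended or site-agreed defaults.

  # Hypothetical input files; replace with the actual PSF/PDB/parameter files
  structure            mysystem.psf
  coordinates          mysystem.pdb
  paraTypeCharmm       on
  parameters           par_all_prot.prm
  temperature          310
  outputName           mysystem_run1
  dcdfreq              5000
  outputEnergies       5000

  # Integration: 2 fs time step with bonds to hydrogen constrained
  timestep             2.0
  rigidBonds           all

  # Short-range non-bonded options
  cutoff               12.0
  switching            on
  switchdist           10.0
  pairlistdist         13.5

  # Periodic cell (placeholder dimensions) and PME full electrostatics
  cellBasisVector1     80.0  0.0  0.0
  cellBasisVector2      0.0 80.0  0.0
  cellBasisVector3      0.0  0.0 80.0
  PME                  yes
  PMEGridSpacing       1.0

  # Constant temperature via Langevin dynamics
  langevin             on
  langevinTemp         310
  langevinDamping      1

  # Constant pressure via the Langevin piston method
  useGroupPressure     yes
  langevinPiston       on
  langevinPistonTarget 1.01325
  langevinPistonPeriod 100.0
  langevinPistonDecay  50.0
  langevinPistonTemp   310

  run                  500000

The DCD trajectory written by such a run can then be loaded into VMD, together with the same PSF/PDB pair, for the trajectory analysis mentioned above.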

Features

  • Listed features

Architectural/Functional Overview

  • high level design info, how it works, performance - may be a link, or several links

Usage Overview

  • If possible with small example - may be a link (a minimal run sketch follows below)
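
As a small, hedged usage illustration: the configuration file and log names below are placeholders, and which launch command applies depends on how the locally installed NAMD binary was built.

  # Multicore (shared-memory) build on a single node, using 8 cores
  namd2 +p8 mysim.conf > mysim.log

  # Charm++ network build, started through charmrun
  charmrun +p64 namd2 mysim.conf > mysim.log

  # MPI build, started through the site's MPI launcher
  mpirun -np 64 namd2 mysim.conf > mysim.log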

Dependencies

  • Charm++/Converse runtime system (NAMD's parallel framework); when building from source, Tcl (for configuration-file scripting) and FFTW (used for particle-mesh Ewald) are typically also required.

HP-SEE Applications

  • CompChem
  • ISyMAB
  • MDSCS
  • HC-HC-MD-QM-CS

Resource Centers

  • BG, BG
  • HPCG, BG
  • IFIN_Bio, RO
  • NCIT-Cluster, RO

Usage by Other Projects and Communities

  • If any

Recommendations for Configuration and Usage

Please describe here any common settings, configurations or conventions that would make the usage of this resource (library or tool) more interoperable or scalable across the HP-SEE resources. These recommendations should include anything that is related to the resource and is agreed upon by administrators and users, or across sites and applications. They should emerge from questions or discussions opened by site administrators or application developers at any stage, including installation, development, usage, or adaptation for another HPC centre.

Provided descriptions should cover general or site-specific aspects of resource installation, configuration and usage, or describe the guidelines or conventions for deploying or using the resource within the local (user/site) or temporary (job) environment. Examples are:

  • Common configuration settings of the execution environment
  • Filesystem path or local access string
  • Environment variables to be set or used by applications
  • Options (e.g. additional modules) that are needed or required by applications and should be present
  • Minimum quantitative values (e.g. quotas) offered by the site
  • Location and format of a configuration or usage hint instructing applications on proper use of the resource or site-specific policy
  • Key installation or configuration settings that should be set to a common value, or locally tweaked by site admins
  • Conventions for application- or job-bound installation and usage of the resource
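
As an illustration of the kind of job-bound convention listed above, the following hypothetical batch-script sketch shows how a NAMD job might be set up; the scheduler and its directives, the module name, the core count and the file names are all assumptions that have to be adapted to each HP-SEE site.

  #!/bin/bash
  #PBS -l nodes=4:ppn=8
  #PBS -l walltime=24:00:00
  # 'namd' is an assumed module name; check 'module avail' on the local system
  module load namd
  cd $PBS_O_WORKDIR
  # An MPI-enabled NAMD build is assumed; a Charm++ network build would be
  # launched with charmrun instead
  mpirun -np 32 namd2 mysim.conf > mysim.log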