ScaLAPACK
From HP-SEE Wiki
- Web site: http://www.netlib.org/scalapack/
- Described version: 1.8.0
- Licensing: BSD-new license: http://www.netlib.org/scalapack/LICENSE
- User documentation: http://www.netlib.org/scalapack/slug/index.html , http://www.netlib.org/scalapack/slug/node182.html#SECTION041100000000000000000 , http://icl.cs.utk.edu/lapack-forum/
- Download: http://www.netlib.org/scalapack/
- Source code: http://www.netlib.org/scalapack/scalapack.tgz
Authors/Maintainers
Summary
ScaLAPACK is a library of high-performance linear algebra routines for distributed-memory message-passing MIMD computers and networks of workstations supporting PVM or MPI. It is a continuation of the LAPACK project, which designed and produced analogous software for workstations, vector supercomputers, and shared-memory parallel computers. Both libraries contain routines for solving systems of linear equations, least squares problems, and eigenvalue problems.

The goals of both projects are efficiency (running as fast as possible), scalability (as the problem size and number of processors grow), reliability (including error bounds), portability (across all important parallel machines), flexibility (so users can construct new routines from well-designed parts), and ease of use (by making the interfaces to LAPACK and ScaLAPACK as similar as possible). Many of these goals, particularly portability, are aided by developing and promoting standards, especially for low-level communication and computation routines. Most machine dependencies are confined to two standard libraries: the BLAS (Basic Linear Algebra Subprograms) and the BLACS (Basic Linear Algebra Communication Subprograms). LAPACK runs on any machine where the BLAS are available, and ScaLAPACK on any machine where both the BLAS and the BLACS are available.

Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. The fundamental building blocks of the ScaLAPACK library are distributed-memory versions (PBLAS) of the Level 1, 2, and 3 BLAS, and the BLACS for communication tasks that arise frequently in parallel linear algebra computations.

The library is currently written in Fortran 77 (with the exception of a few symmetric eigenproblem auxiliary routines written in C to exploit IEEE arithmetic) in a Single Program Multiple Data (SPMD) style, using explicit message passing for interprocessor communication. The name ScaLAPACK is an acronym for Scalable Linear Algebra PACKage, or Scalable LAPACK. ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.
Features
http://www.netlib.org/scalapack/slug/node159.html#SECTION04800000000000000000
Architectural/Functional Overview
http://www.netlib.org/scalapack/slug/node104.html#SECTION04500000000000000000 http://icl.cs.utk.edu/lapack-forum/viewforum.php?f=6
Usage Overview
http://www.netlib.org/scalapack/slug/node179.html#SECTION041000000000000000000
Dependencies
ScaLAPACK requires the BLAS, LAPACK, and the BLACS; the BLACS in turn require a message-passing library (MPI or PVM).
HP-SEE Applications
- HMLQCD (Hadron Masses from Lattice QCD)
- NUQG (Numerical study of ultra-cold quantum gases)
- GENETATOMIC (Genetic algorithms in atomic collisions)
Resource Centers
- BG, BG
- HPCG, BG
- NCIT-Cluster, RO
- NIIFI SC, HU
- PARADOX, RS
Usage by Other Projects and Communities
Recommendations for Configuration and Usage
http://icl.cs.utk.edu/lapack-forum/viewforum.php?f=4