Parallel Performance of Linear Solvers and Preconditioners

Report No. ARL-TR-6778
Authors: Joshua C. Crone; Lynn B. Munday
Date/Pages: January 2014; 24 pages
Abstract: In this report we examine the performance of parallel linear solvers and preconditioners available in the Hypre, PETSc, and MUMPS libraries to identify the combination with the shortest wall-clock time for large-scale linear systems. The linear system of equations in this work is produced by a finite element code solving a linear elastic boundary value problem (BVP). The boundary conditions for the linear elastic BVP are produced by a discrete dislocation dynamics (DDD) simulation and change at each timestep of the DDD simulation as the dislocation structure evolves. However, the coefficient (stiffness) matrix remains constant throughout the DDD simulation, so the expensive matrix factorization need occur only once, during initialization. Our results show that for system sizes of fewer than three million degrees of freedom (DOF), the MUMPS direct solver is 20× faster per timestep than the best iterative solvers, but it carries a large upfront cost for the LU decomposition. Systems larger than three million DOF require iterative solvers. The Hypre algebraic multigrid (AMG) preconditioner was the best-performing iterative option, but it was found to be sensitive to the AMG parameters. The PETSc Block Jacobi preconditioner showed good performance with its default settings.
Distribution: Approved for public release
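The abstract's key observation is that the stiffness matrix is constant across DDD timesteps, so a direct solver's LU factorization or an AMG preconditioner's setup can be amortized over many right-hand sides. The following is a minimal PETSc sketch of that pattern; it is not taken from the report's code, and the function name solve_timesteps and its argument layout are illustrative. The runtime options in the comments select the solver configurations the report compares (MUMPS direct, Hypre BoomerAMG, and Block Jacobi); the MUMPS option is spelled -pc_factor_mat_solver_package in older PETSc releases.

    /* Hypothetical sketch: set up the solver once for a constant stiffness
     * matrix K, then reuse it as only the right-hand side changes per timestep.
     * Example runtime options:
     *   -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps   (MUMPS direct)
     *   -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg             (Hypre AMG)
     *   -ksp_type cg -pc_type bjacobi                                    (Block Jacobi)
     */
    #include <petscksp.h>

    PetscErrorCode solve_timesteps(Mat K, Vec *b, Vec *u, PetscInt nsteps)
    {
      KSP            ksp;
      PetscErrorCode ierr;
      PetscInt       i;

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, K, K);CHKERRQ(ierr);  /* stiffness matrix is constant */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);      /* solver/preconditioner chosen at runtime */

      for (i = 0; i < nsteps; i++) {
        /* For a direct solver the LU factorization is computed on the first
         * solve and reused afterwards; for an iterative solver the AMG or
         * Block Jacobi setup is likewise done once, since K never changes. */
        ierr = KSPSolve(ksp, b[i], u[i]);CHKERRQ(ierr);
      }
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }

Deferring the choice of solver and preconditioner to KSPSetFromOptions is what makes a comparison like the report's straightforward: the same driver can be rerun with different command-line options rather than recompiled for each solver library.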

Last Update / Reviewed: January 1, 2014