Improving parallelism of scientific and engineering applications on heterogeneous supercomputers
Author
Diamond, Gerrett
Other Contributors
Shephard, M. S. (Mark S.); Slota, George M.; Cutler, Barbara M.; Sahni, Onkar; Smith, Cameron W.
Date Issued
2021-08
Subject
Computer science
Degree
PhD
Terms of Use
This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute (RPI), Troy, NY. Copyright of original work retained by author.; Attribution-NonCommercial-NoDerivs 3.0 United States
Abstract
The rising use of heterogeneous supercomputers introduces both opportunities for increased parallelism and challenges for efficient use of the available hardware. Applications running on heterogeneous supercomputers must adopt new methods to achieve performance across two levels of parallelism: inter-process parallelism, the coordination between processes, and intra-process parallelism, the parallelism within each process. This thesis presents research towards improving inter-process and intra-process parallelism for applications that use complex data structures such as distributed unstructured meshes. Inter-process parallelism is governed by the coupled costs of partitioning the load between processes and of the inter-process communication that the partition induces. To achieve optimal performance, partitions must divide the computational load evenly between processes while minimizing the additional costs of communication. This thesis addresses improving inter-process parallelism using multicriteria partition improvement methods on a generalized structure suited to a broad set of potential applications. The partition improvement methods are applied to different unstructured mesh configurations with partitions of up to half a million processes. On heterogeneous supercomputers, intra-process parallelism is dictated by the parallel hardware available to each process for performing computations. For most current and next-generation US systems, Graphics Processing Units (GPUs) are the parallel hardware available on each node. This thesis addresses methods for intra-process parallelism in the scope of particle-in-cell simulations, with a novel approach to the storage of the unstructured mesh and the particles that is optimized for performance on GPUs while using performance-portable methods to target future hardware.
Scaling studies of these methods are presented on up to 4096 nodes of the Summit supercomputer, with over a trillion particles simulated.
Description
August 2021; School of Science
Department
Dept. of Computer Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY
Relationships
Rensselaer Theses and Dissertations Online Collection
Access
CC BY-NC-ND. Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 license. No commercial use or derivatives are permitted without the explicit approval of the author.