Time Integration Errors and Energy Conservation Properties of the Stormer Verlet Method Applied to MPM
M. Berzins. In Proceedings of the VI International Conference on Particle-Based Methods – Fundamentals and Applications (PARTICLES 2019), Barcelona, edited by E. Oñate, M. Bischoff, D.R.J. Owen, P. Wriggers & T. Zohdi, pp. 555-566. October, 2019. ISBN: 978-84-121101-1-1

The success of the Material Point Method (MPM) in solving many challenging problems nevertheless raises some open questions regarding the fundamental properties of the method, such as energy conservation, which has received little attention since being addressed by Bardenhagen and by Love and Sulsky. Similarly, while low order symplectic time integration techniques are used with MPM, higher order methods have not been used. For this reason the Stormer Verlet method, a popular and widely-used symplectic method, is applied to MPM. Both the time integration error and the energy conservation properties of this method applied to MPM are considered. The method is shown to have locally third order accuracy of energy conservation in time, in contrast to the locally second order accuracy in energy conservation of the methods used in many MPM calculations. This third order accuracy is demonstrated both locally and globally on a standard MPM test example.
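For readers unfamiliar with the scheme, the one-step (velocity) form of the Stormer Verlet update for position x, velocity v, and acceleration a(x) is sketched below. This is the generic textbook form; the paper's MPM-specific grid/particle transfers are omitted.

```latex
% One-step (velocity) Stormer-Verlet update with time step \Delta t.
\begin{aligned}
v^{n+1/2} &= v^{n} + \tfrac{\Delta t}{2}\, a(x^{n}),\\
x^{n+1}   &= x^{n} + \Delta t\, v^{n+1/2},\\
v^{n+1}   &= v^{n+1/2} + \tfrac{\Delta t}{2}\, a(x^{n+1}).
\end{aligned}
```

Being symplectic, this update conserves a modified energy over long times, which is the property whose local and global accuracy the paper examines in the MPM setting.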
An Evaluation of An Asynchronous Task Based Dataflow Approach For Uintah
A. Humphrey, M. Berzins. In 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Vol. 2, pp. 652-657. July, 2019. ISSN: 0730-3157. DOI: 10.1109/COMPSAC.2019.10282

The challenge of running complex physics codes on the largest computers available has led to dataflow paradigms being explored. While such approaches are often applied at smaller scales, the challenge of extreme-scale dataflow computing remains. The Uintah dataflow framework has consistently used dataflow computing at the largest scales on complex physics applications. At present Uintah contains two main dataflow models, both based upon asynchronous communication: one uses a static graph-based approach, and the other uses a more dynamic approach that was introduced almost a decade ago. Subsequent changes within the Uintah runtime system, combined with many more large scale experiments, have necessitated a reevaluation of these two approaches, comparing them in the context of large scale problems. While the static approach has worked well for some large-scale simulations, the dynamic approach is seen to offer performance improvements over the static case for a challenging fluid-structure interaction problem at large scale that involves fluid flow and a moving solid represented using a particle method on an adaptive mesh.
Node failure resiliency for Uintah without checkpointing
D. Sahasrabudhe, M. Berzins, J. Schmidt. In Concurrency and Computation: Practice and Experience, pp. e5340. 2019. DOI: 10.1002/cpe.5340

The frequency of failures in upcoming exascale supercomputers may well be greater than at present due to many-core architectures, if component failure rates remain unchanged. This potential increase in failure frequency, coupled with I/O challenges at exascale, may prove problematic for current resiliency approaches such as checkpoint restarting, although the use of fast intermediate memory may help. Algorithm-Based Fault Tolerance (ABFT) using Adaptive Mesh Refinement (AMR) is one resiliency approach used to address these challenges: for adaptive mesh codes, a coarse mesh version of the solution may be used to restore the fine mesh solution. This paper addresses the implementation of the ABFT approach within the Uintah software framework, both at a software level within Uintah and in the data reconstruction method used for the recovery of lost data. This recovery method faces two problems: inaccuracies introduced during the reconstruction propagate forward in time, and the physical consistency of variables, such as positivity or boundedness, may be violated during interpolation. These challenges are addressed by the combination of two techniques: 1. a fault-tolerant MPI implementation to recover from runtime node failures, and 2. high-order interpolation schemes to preserve the physical solution and reconstruct lost data. The approach considered here uses a "Limited Essentially Non-Oscillatory" (LENO) scheme along with AMR to rebuild the lost data without checkpointing in Uintah. Experiments were carried out using a fault-tolerant MPI, ULFM, to recover from runtime failure and LENO to recover data on patches belonging to failed ranks, while the simulation was continued to the end. Results show that this ABFT approach is up to 10x faster than the traditional checkpointing method. The new interpolation approach is more accurate than linear interpolation and not subject to the overshoots found in other interpolation methods.
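The following is a simplified 1D sketch of the recovery idea: a lost fine-mesh patch is rebuilt from a retained coarse copy of the solution by higher-order interpolation. It uses an ordinary cubic spline rather than the paper's LENO scheme (which additionally limits the interpolant to avoid overshoots), and all names here are illustrative, not Uintah code.

```python
# Simplified recovery sketch (not Uintah code, not the paper's LENO scheme):
# rebuild a lost fine-mesh patch from a retained coarse solution by cubic interpolation.
import numpy as np
from scipy.interpolate import CubicSpline

def recover_patch(x_coarse, u_coarse, x_fine_lost):
    """Reconstruct fine-mesh values on a failed patch from coarse data."""
    spline = CubicSpline(x_coarse, u_coarse)
    return spline(x_fine_lost)

# Example: coarse mesh retained on surviving ranks, fine patch lost on a failed rank.
x_coarse = np.linspace(0.0, 1.0, 17)        # coarse mesh over the whole domain
u_coarse = np.sin(2 * np.pi * x_coarse)     # stand-in for the coarse solution
x_fine_lost = np.linspace(0.25, 0.5, 17)    # fine cells owned by the failed rank
u_recovered = recover_patch(x_coarse, u_coarse, x_fine_lost)
print(np.max(np.abs(u_recovered - np.sin(2 * np.pi * x_fine_lost))))
```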
Shared-Memory Parallel Computation of Morse-Smale Complexes with Improved Accuracy
A. Gyulassy, P.-T. Bremer, V. Pascucci. In IEEE Transactions on Visualization and Computer Graphics, Vol. 25, No. 1, IEEE, pp. 1183--1192. Jan, 2019. DOI: 10.1109/tvcg.2018.2864848

Topological techniques have proven to be a powerful tool in the analysis and visualization of large-scale scientific data. In particular, the Morse-Smale complex and its various components provide a rich framework for robust feature definition and computation. Consequently, there now exist a number of approaches to compute Morse-Smale complexes for large-scale data in parallel. However, existing techniques are based on discrete concepts which produce the correct topological structure but are known to introduce grid artifacts in the resulting geometry. Here, we present a new approach that combines parallel streamline computation with combinatorial methods to construct a high-quality discrete Morse-Smale complex. In addition to being invariant to the orientation of the underlying grid, this algorithm allows users to selectively build a subset of features using high-quality geometry. In particular, a user may specifically select which ascending/descending manifolds are reconstructed with improved accuracy, focusing computational effort where it matters for subsequent analysis. This approach computes Morse-Smale complexes for larger data than previously feasible with significant speedups. We demonstrate and validate our approach using several examples from a variety of different scientific domains, and evaluate the performance of our method.
Performance Optimization Strategies for WRF Physics Schemes Used in Weather Modeling
T.A.J. Ouermi, R.M. Kirby, M. Berzins. In International Journal of Networking and Computing, Vol. 8, No. 2, IJNC, pp. 301--327. 2018. DOI: 10.15803/ijnc.8.2_301

Performance optimization in the petascale era, and beyond it in the exascale era, has required and will continue to require modifications of legacy codes to take advantage of new architectures with large core counts and SIMD units. The Numerical Weather Prediction (NWP) physics codes considered here are optimized using thread-local structures of arrays (SOA). High-level and low-level optimization strategies are applied to the WRF Single-Moment 6-Class Microphysics Scheme (WSM6) and Global Forecast System (GFS) physics codes used in the NEPTUNE forecast code. By building on previous work optimizing WSM6 on the Intel Knights Landing (KNL), it is shown how to further optimize WSM6, GFS physics, and GFS radiation on Intel KNL, Haswell, and potentially on future micro-architectures with many cores and SIMD vector units. The optimization techniques used herein employ thread-local structures of arrays (SOA), the OpenMP directive OMP SIMD, and minor code transformations to enable better utilization of SIMD units, increase parallelism, improve locality, and reduce memory traffic. The optimized versions of WSM6, GFS physics, and GFS radiation run 70x, 27x, and 23x faster (respectively) on KNL and 26x, 18x, and 30x faster (respectively) on Haswell than their respective original serial versions. Although this work targets WRF physics schemes, the findings are transferable to other performance optimization contexts and provide insight into the optimization of codes with complex physical models for present and near-future architectures with many cores and vector units.
Automatic Halo Management for the Uintah GPU-Heterogeneous Asynchronous Many-Task Runtime
B. Peterson, A. Humphrey, D. Sunderland, J. Sutherland, T. Saad, H. Dasari, M. Berzins. In International Journal of Parallel Programming, Dec, 2018. ISSN: 1573-7640. DOI: 10.1007/s10766-018-0619-1

The Uintah computational framework is used for the parallel solution of partial differential equations on adaptive mesh refinement grids using modern supercomputers. Uintah is structured with an application layer and a separate runtime system, and is based on a distributed directed acyclic graph (DAG) of computational tasks, with a task scheduler that efficiently schedules and executes these tasks on both CPU cores and on-node accelerators. The runtime system identifies task dependencies, creates a task graph prior to the execution of these tasks, automatically generates MPI message tags, and automatically performs halo transfers for simulation variables. Automating halo transfers in a heterogeneous environment poses significant challenges when tasks compute within a few milliseconds, as runtime overhead affects wall time execution, or when simulation variables require large halos spanning most or all of the computational domain, as task dependencies become expensive to process. These challenges are magnified at production scale, when application developers require that each compute node perform thousands of different halo transfers among thousands of simulation variables. The principal contribution of this work is to (1) identify and address inefficiencies that arise when mapping tasks onto the GPU in the presence of automated halo transfers, (2) implement new schemes to reduce runtime system overhead, (3) minimize application developer involvement with the runtime, and (4) show overhead reduction results from these improvements.
Demonstrating GPU Code Portability and Scalability for Radiative Heat Transfer Computations
B. Peterson, A. Humphrey, J. Holmen, T. Harman, M. Berzins, D. Sunderland, H.C. Edwards. In Journal of Computational Science, Elsevier BV, June, 2018. ISSN: 1877-7503. DOI: 10.1016/j.jocs.2018.06.005

High performance computing frameworks utilizing CPUs, Nvidia GPUs, and/or Intel Xeon Phis necessitate portable and scalable solutions for application developers. Nvidia GPUs in particular present numerous portability challenges with a different programming model, additional memory hierarchies, and partitioned execution units among streaming multiprocessors. This work presents modifications to the Uintah asynchronous many-task runtime and the Kokkos portability library to enable a single codebase for complex multiphysics applications to run across different architectures. Scalability and performance results are shown on multiple architectures for a globally coupled radiation heat transfer simulation, ranging from a single node to 16,384 Titan compute nodes.
On the Treatment of Field Quantities and Elemental Continuity in FEM Solutions
A. Jallepalli, J. Docampo-Sánchez, J.K. Ryan, R. Haimes, R.M. Kirby. In IEEE Transactions on Visualization and Computer Graphics, Vol. 24, No. 1, IEEE, pp. 903--912. Jan, 2018. DOI: 10.1109/tvcg.2017.2744058

As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this in another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated: visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only C0 continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.
Weighted approximate Fekete points: sampling for least-squares polynomial approximation
L. Guo, A. Narayan, L. Yan, T. Zhou. In SIAM Journal on Scientific Computing, Vol. 40, No. 1, SIAM, pp. A366--A387. Jan, 2018. DOI: 10.1137/17m1140960

We propose and analyze a weighted greedy scheme for computing deterministic sample configurations in multidimensional space for performing least-squares polynomial approximations on $L^2$ spaces weighted by a probability density function. Our procedure is a particular weighted version of the approximate Fekete points method, with the weight function chosen as the (inverse) Christoffel function. Our procedure has theoretical advantages: when linear systems with optimal condition number exist, the procedure finds them. In the one-dimensional setting with any density function, our greedy procedure almost always generates optimally conditioned linear systems. Our method also has practical advantages: it is impartial to the compactness of the domain of approximation and uses only pivoted linear algebraic routines. We show through numerous examples that our sampling design outperforms competing randomized and deterministic designs in both low- and high-dimensional domains.
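To make the procedure concrete, here is a small, hypothetical 1D sketch of weighted approximate Fekete point selection for the uniform density on [-1, 1]: rows of an orthonormal Legendre design matrix are weighted by the (inverse) Christoffel function and a greedy, determinant-maximizing subset is extracted with pivoted QR. This is not the authors' code and omits their analysis; the grid, degree, and normalizations are illustrative choices.

```python
# Sketch of weighted approximate Fekete point selection in 1D (uniform density on [-1, 1]).
import numpy as np
from numpy.polynomial import legendre
from scipy.linalg import qr

deg = 10                                   # polynomial degree -> deg + 1 basis functions
candidates = np.linspace(-1.0, 1.0, 2001)  # dense candidate grid

# Design matrix of Legendre polynomials, orthonormal w.r.t. the uniform probability density.
V = legendre.legvander(candidates, deg)
V *= np.sqrt(2.0 * np.arange(deg + 1) + 1.0)

# Christoffel-function weighting: damp candidates where the basis is large.
k = np.sum(V**2, axis=1)                   # inverse Christoffel function at each candidate
Vw = np.sqrt((deg + 1) / k)[:, None] * V

# Greedy (determinant-maximizing) row selection via QR with column pivoting on Vw^T.
_, _, piv = qr(Vw.T, pivoting=True, mode='economic')
fekete_pts = np.sort(candidates[piv[:deg + 1]])
print(fekete_pts)
```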
Spectral Element and hp Methods
Y. Yu, R.M. Kirby, G.E. Karniadakis. In Encyclopedia of Computational Mechanics, Second Edition, John Wiley & Sons, Ltd, pp. 1--43. 2018.

Spectral/hp element methods provide high-order discretization, which is essential in the long-time integration of advection-diffusion systems and for capturing dynamic instabilities in solids. In this chapter, we review the main formulations for simulations of incompressible and compressible viscous flows as well as for solid mechanics, and present several examples with some emphasis on fluid-structure interactions and interfaces. The first generation of (nodal) spectral elements was limited to relatively simple geometries and smooth solutions. However, the new generation of spectral/hp elements, consisting of both nodal and modal forms, can handle very complex geometries using unstructured grids and can capture strong shocks by employing discontinuous Galerkin methods. New flexible formulations allow simulations of multiphysics problems including extremely complex geometries and multiphase flows. Several implementation strategies have also been developed on the basis of multilevel parallel algorithms that allow dynamic p-refinement at constant wall clock time. After three decades of intense development, spectral element and hp methods are mature and efficient enough to be used effectively in applications of industrial complexity. They provide the capabilities that standard finite element and finite volume methods do but, in addition, exhibit high-order accuracy and error control.
Curvilinear Mesh Adaptation Using Radial Basis Function Interpolation and Smoothing
V. Zala, V. Shankar, S.P. Sastry, R.M. Kirby. In Journal of Scientific Computing, Springer Nature, pp. 1--22. April, 2018. DOI: 10.1007/s10915-018-0711-0

We present a new iterative technique based on radial basis function (RBF) interpolation and smoothing for the generation and smoothing of curvilinear meshes from straight-sided or other curvilinear meshes. Our technique approximates the coordinate deformation maps in both the interior and boundary of the curvilinear output mesh by using only scattered nodes on the boundary of the input mesh as data sites in an interpolation problem. Thanks to a new iterative algorithm based on modification of the RBF shape parameter, our technique produces high-quality meshes in the deformed domain even when the deformation maps are singular. Due to the use of RBF interpolation, our technique is applicable to both 2D and 3D curvilinear mesh generation without significant modification.
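A minimal sketch of the basic ingredient, assuming SciPy's RBFInterpolator as the RBF solver: displacements prescribed only at boundary nodes define a smooth deformation map that is then applied to interior nodes. The paper's iterative shape-parameter modification and smoothing steps are not shown, and the geometry below is a made-up example.

```python
# Minimal sketch (not the authors' implementation): deform interior mesh nodes
# with an RBF interpolant built only from boundary-node displacements.
import numpy as np
from scipy.interpolate import RBFInterpolator

def curve_mesh(boundary_pts, boundary_displacements, interior_pts):
    """Interpolate boundary displacements to the interior with RBFs."""
    rbf = RBFInterpolator(boundary_pts, boundary_displacements,
                          kernel='thin_plate_spline')
    return interior_pts + rbf(interior_pts)

# Example: push straight-sided boundary nodes outward onto a circular arc.
theta = np.linspace(0.0, np.pi / 2, 20)
boundary = 0.9 * np.column_stack([np.cos(theta), np.sin(theta)])
target = np.column_stack([np.cos(theta), np.sin(theta)])   # desired curved boundary
interior = np.random.default_rng(0).uniform(0.1, 0.8, size=(200, 2))
curved_interior = curve_mesh(boundary, target - boundary, interior)
```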
Flexible Live-Wire: Image Segmentation with Floating Anchors
B. Summa, N. Faraj, C. Licorish, V. Pascucci. In Computer Graphics Forum, Vol. 37, No. 2, Wiley, pp. 321-328. May, 2018. DOI: 10.1111/cgf.13364

We introduce Flexible Live-Wire, a generalization of the Live-Wire interactive segmentation technique with floating anchors. In our approach, the user input for Live-Wire is no longer limited to the setting of pixel-level anchor nodes, but can use more general anchor sets. These sets can be of any dimension, size, or connectedness. The generality of the approach allows the design of a number of user interactions while providing the same functionality as the traditional Live-Wire. In particular, we experiment with this new flexibility by designing four novel Live-Wire interactions based on specific primitives: paint, pinch, probable, and pick anchors. These interactions are only a subset of the possibilities enabled by our generalization. Moreover, we discuss the computational aspects of this approach and provide practical solutions to alleviate any additional overhead. Finally, we illustrate our approach and new interactions through several example segmentations.
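To make the generalization concrete, the hypothetical sketch below casts Live-Wire as a shortest path on a pixel grid and supports anchor *sets* simply by seeding Dijkstra's algorithm from every pixel in a source set and stopping at the first pixel reached in a destination set. The paper's interactions and cost design are not reproduced; the cost model here is a placeholder.

```python
# Illustrative sketch (not the paper's implementation): Live-Wire as a shortest
# path on a pixel grid, with anchors generalized from single pixels to pixel sets.
import heapq
import numpy as np

def live_wire(cost, src_set, dst_set):
    """cost: 2D array of per-pixel costs (e.g. inverse gradient magnitude).
    src_set, dst_set: iterables of (row, col) anchor pixels."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    heap = []
    for s in src_set:                      # floating anchor: many start pixels
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    dst_set = set(dst_set)
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in dst_set:              # first destination pixel reached
            path = [(r, c)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist[r, c]:
            continue                       # stale heap entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    return []
```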
Uncertainty quantification guided robust design for nanoparticles' morphology
Y. He, M. Razi, C. Forestiere, L. Dal Negro, R.M. Kirby. In Computer Methods in Applied Mechanics and Engineering, Elsevier BV, pp. 578--593. July, 2018. DOI: 10.1016/j.cma.2018.03.027

The automatic inverse design of three-dimensional plasmonic nanoparticles enables scientists and engineers to explore a wide design space and to maximize a device's performance. However, due to the large uncertainty in the nanofabrication process, we may not be able to obtain a deterministic value of the objective, and the objective may vary dramatically with respect to small variations in the uncertain parameters. Therefore, we take this uncertainty into account in the simulations and adopt a classical robust design model. In addition, we propose an efficient numerical procedure for the robust design that reduces the computational cost caused by the consideration of uncertainty. Specifically, we use a global sensitivity analysis method to identify the important random variables and treat the unimportant ones as deterministic, thereby reducing the dimension of the stochastic space. In addition, we apply the generalized polynomial chaos expansion method to construct computationally cheaper surrogate models that approximate and replace the full simulations. This efficient robust design procedure is performed by varying the particles' material among the most commonly used plasmonic materials, such as gold, silver, and aluminum, to obtain different robust optimal shapes for the best enhancement of electric fields.
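As a rough illustration of the surrogate-plus-sensitivity workflow (not the authors' code), the sketch below fits a least-squares generalized polynomial chaos surrogate in two uniform random parameters to a toy stand-in model and reads first-order Sobol' indices off the expansion coefficients; variables judged unimportant this way could then be frozen at nominal values.

```python
# Hedged sketch: least-squares gPC surrogate for a toy model of two uniform
# random parameters, with first-order Sobol' indices from the coefficients.
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def model(x):                              # stand-in for an expensive EM solver
    return np.exp(0.3 * x[:, 0]) + 0.05 * x[:, 1] ** 2

deg = 4
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(400, 2))  # training samples in [-1, 1]^2
y = model(X)

# Tensor basis of Legendre polynomials (orthonormal w.r.t. the uniform
# probability density), truncated by total degree.
multi_idx = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]
def basis(X):
    V0 = legendre.legvander(X[:, 0], deg) * np.sqrt(2.0 * np.arange(deg + 1) + 1.0)
    V1 = legendre.legvander(X[:, 1], deg) * np.sqrt(2.0 * np.arange(deg + 1) + 1.0)
    return np.column_stack([V0[:, i] * V1[:, j] for i, j in multi_idx])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# Variance decomposition: total variance and first-order Sobol' indices.
var_total = np.sum(coef[1:] ** 2)          # constant term (0, 0) is first
S1 = sum(c**2 for c, (i, j) in zip(coef, multi_idx) if i > 0 and j == 0) / var_total
S2 = sum(c**2 for c, (i, j) in zip(coef, multi_idx) if j > 0 and i == 0) / var_total
print(f"S1 ~ {S1:.3f}, S2 ~ {S2:.3f}")
```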