Center for Integrative Biomedical Computing

SCI Publications


J. Adams, S. Elhabian. “Fully Bayesian VIB-DeepSSM,” Subtitled “arXiv:2305.05797,” 2023.


Statistical shape modeling (SSM) enables population-based quantitative analysis of anatomical shapes, informing clinical diagnosis. Deep learning approaches predict correspondence-based SSM directly from unsegmented 3D images but require calibrated uncertainty quantification, motivating Bayesian formulations. Variational information bottleneck DeepSSM (VIB-DeepSSM) is an effective, principled framework for predicting probabilistic shapes of anatomy from images with aleatoric uncertainty quantification. However, VIB is only half-Bayesian and lacks epistemic uncertainty inference. We derive a fully Bayesian VIB formulation from both the probably approximately correct (PAC)-Bayes and variational inference perspectives. We demonstrate the efficacy of two scalable approaches for Bayesian VIB with epistemic uncertainty: concrete dropout and batch ensemble. Additionally, we introduce a novel combination of the two that further enhances uncertainty calibration via multimodal marginalization. Experiments on synthetic shapes and left atrium data demonstrate that the fully Bayesian VIB network predicts SSM from images with improved uncertainty reasoning without sacrificing accuracy.
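
The batch ensemble scheme mentioned in the abstract keeps one shared weight matrix and adds cheap rank-1 factors per ensemble member. A minimal numpy sketch of that weight decomposition (layer sizes, member count, and variable names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_members = 4, 3, 8

W = rng.standard_normal((d_in, d_out))        # shared "slow" weights
r = rng.standard_normal((n_members, d_in))    # per-member input scalers
s = rng.standard_normal((n_members, d_out))   # per-member output scalers

def member_forward(x, i):
    # Member i's effective weight is W * outer(r_i, s_i), applied as
    # elementwise input/output scaling around the shared matmul.
    return ((x * r[i]) @ W) * s[i]

x = rng.standard_normal(d_in)
preds = np.stack([member_forward(x, i) for i in range(n_members)])

mean = preds.mean(axis=0)        # ensemble prediction
epistemic = preds.var(axis=0)    # disagreement across members
```

Each member adds only d_in + d_out parameters rather than a full d_in × d_out matrix, which is what makes the ensemble scalable; the spread of `preds` across members serves as the epistemic uncertainty signal.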

J. Adams, S. Elhabian. “Can point cloud networks learn statistical shape models of anatomies?,” Subtitled “arXiv:2305.05610,” 2023.


Statistical Shape Modeling (SSM) is a valuable tool for investigating and quantifying anatomical variations within populations of anatomies. However, traditional correspondence-based SSM generation methods require a time-consuming re-optimization process each time a new subject is added to the cohort, making the inference process prohibitive for clinical research. Additionally, they require complete geometric proxies (e.g., high-resolution binary volumes or surface meshes) as input shapes to construct the SSM. Unordered 3D point cloud representations of shapes are more easily acquired from various medical imaging practices (e.g., thresholded images and surface scanning). Point cloud deep networks have recently achieved remarkable success in learning permutation-invariant features for different point cloud tasks (e.g., completion, semantic segmentation, classification). However, their application to learning SSM from point clouds is, to date, unexplored. In this work, we demonstrate that existing point cloud encoder-decoder-based completion networks can provide an untapped potential for SSM, capturing population-level statistical representations of shapes while reducing the inference burden and relaxing the input requirement. We discuss the limitations of these techniques to the SSM application and suggest future improvements. Our work paves the way for further exploration of point cloud deep learning for SSM, a promising avenue for advancing shape analysis literature and broadening SSM to diverse use cases.

J. Adams, S. Elhabian. “Point2SSM: Learning Morphological Variations of Anatomies from Point Cloud,” Subtitled “arXiv:2305.14486,” 2023.


We introduce Point2SSM, a novel unsupervised learning approach that can accurately construct correspondence-based statistical shape models (SSMs) of anatomy directly from point clouds. SSMs are crucial in clinical research for analyzing the population-level morphological variation in bones and organs. However, traditional methods for creating SSMs have limitations that hinder their widespread adoption, such as the need for noise-free surface meshes or binary volumes, reliance on assumptions or predefined templates, and simultaneous optimization of the entire cohort leading to lengthy inference times given new data. Point2SSM overcomes these barriers by providing a data-driven solution that infers SSMs directly from raw point clouds, reducing inference burdens and increasing applicability as point clouds are more easily acquired. Deep learning on 3D point clouds has seen recent success in unsupervised representation learning, point-to-point matching, and shape correspondence; however, their application to constructing SSMs of anatomies is largely unexplored. In this work, we benchmark state-of-the-art point cloud deep networks on the task of SSM and demonstrate that they are not robust to the challenges of anatomical SSM, such as noisy, sparse, or incomplete input and significantly limited training data. Point2SSM addresses these challenges via an attention-based module that provides correspondence mappings from learned point features. We demonstrate that the proposed method significantly outperforms existing networks in terms of both accurate surface sampling and correspondence, better capturing population-level statistics.

J. Adams, S.Y. Elhabian. “Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation,” Subtitled “arXiv:2308.07506,” 2023.


Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning. However, quantifying and understanding the uncertainty associated with model predictions is crucial in critical clinical applications. While many techniques have been proposed for epistemic or model-based uncertainty estimation, it is unclear which method is preferred in the medical image analysis setting. This paper presents a comprehensive benchmarking study that evaluates epistemic uncertainty quantification methods in organ segmentation in terms of accuracy, uncertainty calibration, and scalability. We provide a comprehensive discussion of the strengths, weaknesses, and out-of-distribution detection capabilities of each method as well as recommendations for future improvements. These findings contribute to the development of reliable and robust models that yield accurate segmentations while effectively quantifying epistemic uncertainty.
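
As a toy illustration of the kind of epistemic measure such a benchmark compares, the mutual-information decomposition for an ensemble of segmentation models can be sketched in a few lines of numpy (random probabilities stand in for real network outputs; this is not the paper's benchmark code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, H, W = 5, 8, 8

# Stand-in for an ensemble's per-pixel foreground probabilities
# (in practice these come from independently trained segmentation nets).
probs = rng.uniform(0.0, 1.0, size=(n_models, H, W))

mean_prob = probs.mean(axis=0)

def entropy(p, eps=1e-12):
    # Binary entropy, elementwise
    return -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))

# Predictive entropy = aleatoric + epistemic; the mutual-information
# term isolates the epistemic (model-disagreement) part.
predictive = entropy(mean_prob)
aleatoric = entropy(probs).mean(axis=0)
epistemic = predictive - aleatoric
```

By concavity of entropy, `epistemic` is non-negative; pixels where the models disagree most are flagged as unreliable.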

D. Akbaba, D. Lange, M. Correll, A. Lex, M. Meyer. “Troubling Collaboration: Matters of Care for Visualization Design Study,” In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23), pp. 23--28. April, 2023.


A common research process in visualization is for visualization researchers to collaborate with domain experts to solve particular applied data problems. While there is existing guidance and expertise around how to structure collaborations to strengthen research contributions, there is comparatively little guidance on how to navigate the implications of, and power produced through the socio-technical entanglements of collaborations. In this paper, we qualitatively analyze reflective interviews of past participants of collaborations from multiple perspectives: visualization graduate students, visualization professors, and domain collaborators. We juxtapose the perspectives of these individuals, revealing tensions about the tools that are built and the relationships that are formed — a complex web of competing motivations. Through the lens of matters of care, we interpret this web, concluding with considerations that both trouble and necessitate reformation of current patterns around collaborative work in visualization design studies to promote more equitable, useful, and care-ful outcomes.

M. Aliakbari, M.S. Sadrabadi, P. Vadasz, A. Arzani. “Ensemble physics informed neural networks: A framework to improve inverse transport modeling in heterogeneous domains,” In Physics of Fluids, AIP, 2023.


Modeling fluid flow and transport in heterogeneous systems is often challenged by unknown parameters that vary in space. In inverse modeling, measurement data are used to estimate these parameters. Due to the spatial variability of these unknown parameters in heterogeneous systems (e.g., permeability or diffusivity), the inverse problem is ill-posed and infinite solutions are possible. Physics-informed neural networks (PINN) have become a popular approach for solving inverse problems. However, in inverse problems in heterogeneous systems, PINN can be sensitive to hyperparameters and can produce unrealistic patterns. Motivated by the concept of ensemble learning and variance reduction in machine learning, we propose an ensemble PINN (ePINN) approach where an ensemble of parallel neural networks is used and each sub-network is initialized with a meaningful pattern of the unknown parameter. Subsequently, these parallel networks provide a basis that is fed into a main neural network that is trained using PINN. It is shown that an appropriately selected set of patterns can guide PINN in producing more realistic results that are relevant to the problem of interest. To assess the accuracy of this approach, inverse transport problems involving unknown heat conductivity, porous media permeability, and velocity vector fields were studied. The proposed ePINN approach was shown to increase the accuracy in inverse problems and mitigate the challenges associated with non-uniqueness.
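
The core idea of seeding ensemble members with meaningful parameter patterns can be caricatured in numpy: here the "main network" is collapsed to a linear combination of candidate patterns fit to sparse measurements. All names and values are illustrative, and no physics residual is included:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)

# Toy spatially varying diffusivity to recover
k_true = 1.0 + 0.5 * np.sin(2 * np.pi * x)

# Each ensemble member is initialized with a meaningful candidate
# pattern of the unknown parameter (constant, linear, sinusoidal, ...)
basis = np.stack([
    np.ones_like(x),
    x,
    np.sin(2 * np.pi * x),
    np.cos(2 * np.pi * x),
])

# Noisy point measurements of the parameter field
idx = rng.choice(x.size, size=20, replace=False)
obs = k_true[idx] + 0.01 * rng.standard_normal(20)

# The combining "main network" is reduced here to a least-squares
# linear combination of the basis fields at the measurement points.
coeffs, *_ = np.linalg.lstsq(basis[:, idx].T, obs, rcond=None)
k_rec = coeffs @ basis

rel_err = np.linalg.norm(k_rec - k_true) / np.linalg.norm(k_true)
```

Because the candidate patterns span the true field, the recovery is nearly exact; the point of the ensemble construction is that a well-chosen pattern library constrains an otherwise non-unique inverse problem.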

A. Arzani, L. Yuan, P. Newell, B. Wang. “Interpreting and generalizing deep learning in physics-based problems with functional linear models,” Subtitled “arXiv:2307.04569,” 2023.


Although deep learning has achieved remarkable success in various scientific machine learning applications, its black-box nature poses concerns regarding interpretability and generalization capabilities beyond the training data. Interpretability is crucial and often desired in modeling physical systems. Moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. We demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. We present test cases in solid mechanics, fluid mechanics, and transport. Our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve OOD generalization while providing more transparency and interpretability. Our study underscores the significance of interpretability in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning.
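
The backbone of such a surrogate is the discretized functional linear model y ≈ ∫ β(t) x(t) dt. A self-contained numpy sketch on synthetic data, where ridge least squares stands in for the paper's sparse regression over a kernel library (all quantities are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 64)      # domain of the input functions
n_samples = 200

# Synthetic input functions x_i(t): random combinations of Fourier modes
X = sum(rng.standard_normal((n_samples, 1)) * np.sin((k + 1) * np.pi * t)
        for k in range(4))

# Target functional: y_i = integral of beta(t) x_i(t) dt, known kernel beta
beta_true = np.exp(-((t - 0.3) ** 2) / 0.02)
dt = t[1] - t[0]
y = X @ beta_true * dt

# Fit the discretized functional linear model y ~ (X dt) beta by
# ridge-regularized least squares.
A = X * dt
lam = 1e-6
beta_hat = np.linalg.solve(A.T @ A + lam * np.eye(t.size), A.T @ y)

rel_err = np.linalg.norm(A @ beta_hat - y) / np.linalg.norm(y)
```

The fitted kernel `beta_hat` is directly plottable and interpretable, which is the appeal of functional linear surrogates over black-box networks.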

T. M. Athawale, C.R. Johnson, S. Sane, D. Pugmire. “Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models,” In IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 1, IEEE, pp. 613--623. 2023.


Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green’s theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and level-set extraction based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.
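
For independent parametric noise the fiber probability factorizes per component, which is easy to sanity-check against Monte Carlo. A toy numpy example with uniform noise and a rectangular trait (all values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Trait: rectangle in bivariate data space that defines the fiber
trait = dict(u=(0.4, 0.8), v=(0.2, 0.6))

# Uncertain bivariate datum: independent uniform noise per component
u_lo, u_hi = 0.3, 0.7
v_lo, v_hi = 0.1, 0.5

def overlap(a_lo, a_hi, b_lo, b_hi):
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

# Closed form: independence means the probability factorizes
p_closed = (overlap(u_lo, u_hi, *trait["u"]) / (u_hi - u_lo) *
            overlap(v_lo, v_hi, *trait["v"]) / (v_hi - v_lo))

# Monte Carlo cross-check
n = 200_000
u = rng.uniform(u_lo, u_hi, n)
v = rng.uniform(v_lo, v_hi, n)
inside = ((trait["u"][0] <= u) & (u <= trait["u"][1]) &
          (trait["v"][0] <= v) & (v <= trait["v"][1]))
p_mc = inside.mean()
```

Repeating this per grid cell yields the probability volume that the paper renders and thresholds.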

J. Baker, E. Cherkaev, A. Narayan, B. Wang. “Learning Proper Orthogonal Decomposition of Complex Dynamics Using Heavy-ball Neural ODEs,” In Journal of Scientific Computing, Vol. 95, No. 14, 2023.


Proper orthogonal decomposition (POD) allows reduced-order modeling of complex dynamical systems at a substantial level, while maintaining a high degree of accuracy in modeling the underlying dynamical systems. Advances in machine learning algorithms enable learning POD-based dynamics from data and making accurate and fast predictions of dynamical systems. This paper extends the recently proposed heavy-ball neural ODEs (HBNODEs) (Xia et al., NeurIPS 2021) for learning data-driven reduced-order models (ROMs) in the POD context, in particular, for learning dynamics of time-varying coefficients generated by the POD analysis on training snapshots constructed by solving full-order models. HBNODE enjoys several practical advantages for learning POD-based ROMs with theoretical guarantees, including 1) HBNODE can learn long-range dependencies effectively from sequential observations, which is crucial for learning intrinsic patterns from sequential data, and 2) HBNODE is computationally efficient in both training and testing. We compare HBNODE with other popular ROMs on several complex dynamical systems, including the von Kármán Street flow, the Kurganov-Petrova-Popov equation, and the one-dimensional Euler equations for fluids modeling.
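
The POD step itself is compactly expressed as an SVD of the snapshot matrix; the time-varying coefficients it produces are what an HBNODE-style model would then learn. A minimal numpy sketch on a synthetic rank-two field (not the paper's test cases):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 100)   # time
x = np.linspace(0.0, 1.0, 80)            # space

# Snapshot matrix: each column is the full-order state at one time
snapshots = (np.outer(np.sin(np.pi * x), np.cos(t)) +
             0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(3 * t)))

# POD modes = left singular vectors of the mean-centered snapshots
mean = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 2                                     # retain two modes
modes = U[:, :r]
coeffs = modes.T @ (snapshots - mean)     # time-varying POD coefficients

# Rank-r reconstruction and captured energy
recon = mean + modes @ coeffs
energy = (S[:r] ** 2).sum() / (S ** 2).sum()
```

Since the synthetic field is exactly rank two, two modes capture essentially all the energy; for real simulations one chooses r from the singular-value decay.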

J.W. Beiriger, W. Tao, M.K. Bruce, E. Anstadt, C. Christiensen, J. Smetona, R. Whitaker, J. Goldstein. “CranioRate™: An Image-Based, Deep-Phenotyping Analysis Toolset and Online Clinician Interface for Metopic Craniosynostosis,” In Plastic and Reconstructive Surgery, 2023.


The diagnosis and management of metopic craniosynostosis involves subjective decision-making at the point of care. The purpose of this work is to describe a quantitative severity metric and point-of-care user interface to aid clinicians in the management of metopic craniosynostosis and to provide a platform for future research through deep phenotyping.

Two machine-learning algorithms were developed that quantify the severity of craniosynostosis – a supervised model specific to metopic craniosynostosis (Metopic Severity Score) and an unsupervised model used for cranial morphology in general (Cranial Morphology Deviation). CT imaging from multiple institutions was compiled to establish the spectrum of severity, and a point-of-care tool was developed and validated.

Over the study period (2019-2021), 254 patients with metopic craniosynostosis and 92 control patients who underwent CT scan between the ages of 6 and 18 months were included. Scans were processed using an unsupervised machine-learning based dysmorphology quantification tool, CranioRate™. The average Metopic Severity Score (MSS) for normal controls was 0.0±1.0 and for metopic synostosis was 4.9±2.3 (p<0.001). The average Cranial Morphology Deviation (CMD) for normal controls was 85.2±19.2 and for metopic synostosis was 189.9±43.4 (p<0.001). A point-of-care user interface has processed 46 CT images from 10 institutions.

The resulting quantification of severity using MSS and CMD has shown an improved capacity, relative to conventional measures, to automatically classify normal controls versus patients with metopic synostosis. We have mathematically described, in an objective and quantifiable manner, the distribution of phenotypes in metopic craniosynostosis.

M. Berzins. “Error Estimation for the Material Point and Particle in Cell Methods,” In ADMOS 2023, 2023.


The Material Point Method (MPM) is widely used for challenging applications in engineering and animation. The complexity of the method makes error estimation challenging. Error analysis of a simple MPM method is undertaken and the global error is shown to be first order in space and time for a widely-used variant of the method. Computational experiments illustrate the estimated accuracy.
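
A first-order estimate of this kind is typically checked by computing the observed convergence order from two refinements. A small sketch with hypothetical error values (not results from the paper):

```python
import math

# Hypothetical global errors measured at two grid/time resolutions
# (values are illustrative only).
h_coarse, h_fine = 0.02, 0.01
err_coarse, err_fine = 4.1e-3, 2.0e-3

# Observed order p from the ansatz e(h) ~ C h^p
p = math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)
# p near 1 is consistent with first-order accuracy in space and time
```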

J. A. Bergquist, B. Zenger, L. Rupp, A. Busatto, J. D. Tate, D. H. Brooks, A. Narayan, R. MacLeod. “Uncertainty quantification of the effect of cardiac position variability in the inverse problem of electrocardiographic imaging,” In Journal of Physiological Measurement, IOP Science, 2023.
DOI: 10.1088/1361-6579/acfc32


Objective: Electrocardiographic imaging (ECGI) is a functional imaging modality that consists of two related problems: the forward problem of reconstructing body surface electrical signals given cardiac bioelectric activity, and the inverse problem of reconstructing cardiac bioelectric activity given measured body surface signals. ECGI relies on a model for how the heart generates bioelectric signals which is subject to variability in inputs. The study of how uncertainty in model inputs affects the model output is known as uncertainty quantification (UQ). This study establishes, develops, and characterizes the application of UQ to ECGI.

Approach: We establish two formulations for applying UQ to ECGI: a polynomial chaos expansion (PCE)-based parametric UQ formulation (PCE-UQ formulation), and a novel UQ-aware inverse formulation which leverages our previously established "joint-inverse" formulation (UQ joint-inverse formulation). We apply these to evaluate the effect of uncertainty in the heart position on the ECGI solutions across a range of ECGI datasets.

Main Results: We demonstrated the ability of our UQ-ECGI formulations to characterize the effect of parameter uncertainty on the ECGI inverse problem. We found that while the PCE-UQ inverse solution provided more complex outputs, such as sensitivities and standard deviation, the UQ joint-inverse solution provided a more interpretable output in the form of a single ECGI solution. We find that between these two methods we are able to assess a wide range of effects that heart position variability has on the ECGI solution.

Significance: This study, for the first time, characterizes in detail the application of UQ to the ECGI inverse problem. We demonstrated how UQ can provide insight into the behavior of ECGI using variability in cardiac position as a test case. This study lays the groundwork for future development of UQ-ECGI studies, as well as future development of ECGI formulations which are robust to input parameter variability.
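
The PCE machinery referenced above yields output statistics directly from expansion coefficients. A one-dimensional toy example with a probabilists' Hermite expansion and a Monte Carlo cross-check (coefficients are made up for illustration, not taken from the study):

```python
import numpy as np

# Model output as a function of one uncertain input (e.g., a heart
# position offset), expanded in probabilists' Hermite polynomials:
#   f(xi) = c0*1 + c1*xi + c2*(xi^2 - 1),  xi ~ N(0, 1)
c = np.array([2.0, 0.3, 0.1])

# Orthogonality gives the statistics directly from the coefficients:
#   E[f] = c0,  Var[f] = sum_{k>=1} c_k^2 * k!
pce_mean = c[0]
pce_var = c[1] ** 2 * 1 + c[2] ** 2 * 2

# Monte Carlo cross-check
rng = np.random.default_rng(6)
xi = rng.standard_normal(500_000)
f = c[0] + c[1] * xi + c[2] * (xi ** 2 - 1)
mc_mean, mc_var = f.mean(), f.var()
```

The closed-form mean and variance require no sampling of the forward model, which is what makes PCE attractive for expensive inverse problems like ECGI.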

T.C. Bidone, D.J. Odde. “Multiscale models of integrins and cellular adhesions,” In Current Opinion in Structural Biology, Vol. 80, Elsevier, 2023.


Computational models of integrin-based adhesion complexes have revealed important insights into the mechanisms by which cells establish connections with their external environment. However, how changes in conformation and function of individual adhesion proteins regulate the dynamics of whole adhesion complexes remains largely elusive. This is because of the large separation in time and length scales between the dynamics of individual adhesion proteins (nanoseconds and nanometers) and the emergent dynamics of the whole adhesion complex (seconds and micrometers), and the limitations of molecular simulation approaches in extracting accurate free energies, conformational transitions, reaction mechanisms, and kinetic rates, that can inform mechanisms at the larger scales. In this review, we discuss models of integrin-based adhesion complexes and highlight their main findings regarding: (i) the conformational transitions of integrins at the molecular and macromolecular scales and (ii) the molecular clutch mechanism at the mesoscale. Lastly, we present unanswered questions in the field of modeling adhesions and propose new ideas for future exciting modeling opportunities.

B. Borotikar, T.E.M. Mutsvangwa, S.Y. Elhabian, E. Audenaert. “Editorial: Statistical model-based computational biomechanics: applications in joints and internal organs,” In Frontiers in Bioengineering and Biotechnology, Vol. 11, 2023.
DOI: 10.3389/fbioe.2023.1232464

S. Brink, M. McKinsey, D. Boehme, C. Scully-Allison, I. Lumsden, D. Hawkins, T. Burgess, V. Lama, J. Luettgau, K.E. Isaacs, M. Taufer, O. Pearce. “Thicket: Seeing the Performance Experiment Forest for the Individual Run Trees,” In HPDC ’23, ACM, 2023.


Thicket is an open-source Python toolkit for Exploratory Data Analysis (EDA) of multi-run performance experiments. It enables an understanding of optimal performance configuration for large-scale application codes. Most performance tools focus on a single execution (e.g., single platform, single measurement tool, single scale). Thicket bridges the gap to convenient analysis in multi-dimensional, multi-scale, multi-architecture, and multi-tool performance datasets by providing an interface for interacting with the performance data.

Thicket has a modular structure composed of three components. The first component is a data structure for multi-dimensional performance data, which is composed automatically on the portable basis of call trees, and accommodates any subset of dimensions present in the dataset. The second is the metadata, enabling distinction and sub-selection of dimensions in performance data. The third is a dimensionality reduction mechanism, enabling analysis such as computing aggregated statistics on a given data dimension. Extensible mechanisms are available for applying analyses (e.g., top-down on Intel CPUs), data science techniques (e.g., K-means clustering from scikit-learn), modeling performance (e.g., Extra-P), and interactive visualization. We demonstrate the power and flexibility of Thicket through two case studies, first with the open-source RAJA Performance Suite on CPU and GPU clusters and another with a large physics simulation run on both a traditional HPC cluster and an AWS Parallel Cluster instance.

S. Campbell, M. C. Mendoza, A. Rammohan, M. E. McKenzie, T. C. Bidone. “Computational model of integrin adhesion elongation under an actin fiber,” In PLOS Computational Biology, Vol. 19, No. 7, Public Library of Science, pp. 1-19. July, 2023.
DOI: 10.1371/journal.pcbi.1011237


Cells create physical connections with the extracellular environment through adhesions. Nascent adhesions form at the leading edge of migrating cells and either undergo cycles of disassembly and reassembly, or elongate and stabilize at the end of actin fibers. How adhesions assemble has been addressed in several studies, but the exact role of actin fibers in the elongation and stabilization of nascent adhesions remains largely elusive. To address this question, here we extended our computational model of adhesion assembly by incorporating an actin fiber that locally promotes integrin activation. The model revealed that an actin fiber promotes adhesion stabilization and elongation. Actomyosin contractility from the fiber also promotes adhesion stabilization and elongation, by strengthening integrin-ligand interactions, but only up to a force threshold. Above this force threshold, most integrin-ligand bonds fail, and the adhesion disassembles. In the absence of contraction, actin fibers still support adhesion stabilization. Collectively, our results provide a picture in which myosin activity is dispensable for adhesion stabilization and elongation under an actin fiber, offering a framework for interpreting several previous experimental observations.

K.R. Carney, A.M. Khan, S. Stam, S.C. Samson, N. Mittal, S. Han, T.C. Bidone, M. Mendoza. “Nascent adhesions shorten the period of lamellipodium protrusion through the Brownian ratchet mechanism,” In Mol Biol Cell, 2023.


Directional cell migration is driven by the conversion of oscillating edge motion into lasting periods of leading edge protrusion. Actin polymerization against the membrane and adhesions control edge motion, but the exact mechanisms that determine protrusion period remain elusive. We addressed this by developing a computational model in which polymerization of actin filaments against a deformable membrane and variable adhesion dynamics support edge motion. Consistent with previous reports, our model showed that actin polymerization and adhesion lifetime power protrusion velocity. However, increasing adhesion lifetime decreased the protrusion period. Measurements of adhesion lifetime and edge motion in migrating cells confirmed that adhesion lifetime promotes protrusion velocity but decreases protrusion duration. Our model showed that adhesions’ control of protrusion persistence originates from the Brownian ratchet mechanism for actin filament polymerization. With longer adhesion lifetime or increased adhesion density, the proportion of actin filaments tethered to the substrate increased, maintaining filaments against the cell membrane. The reduced filament-membrane distance generated pushing force for high edge velocity, but limited further polymerization needed for protrusion duration. We propose a mechanism for cell edge protrusion in which adhesion strength regulates actin filament polymerization to control the periods of leading edge protrusion.
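
The Brownian ratchet mechanism invoked here has a standard minimal form: monomer insertion requires a thermal fluctuation that opens a gap of size δ against load force F, giving a load-dependent growth velocity. A sketch with typical actin-scale constants (the rate values are illustrative, not the paper's parameters):

```python
import numpy as np

kBT = 4.1e-21          # J, thermal energy at room temperature
delta = 2.7e-9         # m, filament length added per actin subunit
k_on_c = 100.0         # 1/s, load-free monomer addition rate (illustrative)
k_off = 1.0            # 1/s, dissociation rate (illustrative)

def growth_velocity(F):
    """Mean tip growth velocity (m/s) under compressive load F (N)."""
    # Insertion rate is attenuated by the Boltzmann factor for
    # opening a monomer-sized gap against the load.
    return delta * (k_on_c * np.exp(-F * delta / kBT) - k_off)

v_free = growth_velocity(0.0)
# Stall force: load at which attenuated insertion balances dissociation
F_stall = (kBT / delta) * np.log(k_on_c / k_off)
```

With these numbers the stall force comes out in the piconewton range, consistent with single-filament measurements; in the paper's model, adhesions shift the filament-membrane gap and thereby move the system along this force-velocity curve.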

H. Dai, M. Penwarden, R.M. Kirby, S. Joshi. “Neural Operator Learning for Ultrasound Tomography Inversion,” Subtitled “arXiv:2304.03297v1,” 2023.


Neural operator learning as a means of mapping between complex function spaces has garnered significant attention in the field of computational science and engineering (CS&E). In this paper, we apply neural operator learning to the time-of-flight ultrasound computed tomography (USCT) problem. We learn the mapping between time-of-flight (TOF) data and the heterogeneous sound speed field using a full-wave solver to generate the training data. This novel application of operator learning circumvents the need to solve the computationally intensive iterative inverse problem. The operator learns the non-linear mapping offline and predicts the heterogeneous sound field with a single forward pass through the model. This is the first time operator learning has been used for ultrasound tomography and is the first step in potential real-time predictions of soft tissue distribution for tumor identification in breast imaging.

H. Dai, M. Bauer, P.T. Fletcher, S. Joshi. “Modeling the Shape of the Brain Connectome via Deep Neural Networks,” In Information Processing in Medical Imaging, Springer Nature Switzerland, pp. 291--302. 2023.
ISBN: 978-3-031-34048-2


The goal of diffusion-weighted magnetic resonance imaging (DWI) is to infer the structural connectivity of an individual subject's brain in vivo. To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. This leads to the challenging problem of estimating a Riemannian metric that is compatible with the DWI data, i.e., a metric such that the geodesic curves represent individual fiber tracts of the connectome. We reduce this problem to that of solving a highly nonlinear set of partial differential equations (PDEs) and study the applicability of convolutional encoder-decoder neural networks (CEDNNs) for solving this geometrically motivated PDE. Our method achieves excellent performance in the alignment of geodesics with white matter pathways and tackles a long-standing issue in previous geodesic tractography methods: the inability to recover crossing fibers with high fidelity. Code is available at

Y. Ding, J. Wilburn, H. Shrestha, A. Ndlovu, K. Gadhave, C. Nobre, A. Lex, L. Harrison. “reVISit: Supporting Scalable Evaluation of Interactive Visualizations,” Subtitled “OSF Preprints,” 2023.


reVISit is an open-source software toolkit and framework for creating, deploying, and monitoring empirical visualization studies. Running a quality empirical study in visualization can be demanding and resource-intensive, requiring substantial time, cost, and technical expertise from the research team. These challenges are amplified as research norms trend towards more complex and rigorous study methodologies, alongside a growing need to evaluate more complex interactive visualizations. reVISit aims to ameliorate these challenges by introducing a domain-specific language for study set-up, and a series of software components, such as UI elements, behavior provenance, and an experiment monitoring and management interface. Together with interactive or static stimuli provided by the experimenter, these are compiled to a ready-to-deploy web-based experiment. We demonstrate reVISit's functionality by re-implementing two studies – a graphical perception task and a more complex, interactive study. reVISit is an open-source community project, available at