
SCI Publications


J.W. Beiriger, W. Tao, M.K. Bruce, E. Anstadt, C. Christiensen, J. Smetona, R. Whitaker, J. Goldstein. “CranioRate™: An Image-Based, Deep-Phenotyping Analysis Toolset and Online Clinician Interface for Metopic Craniosynostosis,” In Plastic and Reconstructive Surgery, 2023.


The diagnosis and management of metopic craniosynostosis involves subjective decision-making at the point of care. The purpose of this work is to describe a quantitative severity metric and point-of-care user interface to aid clinicians in the management of metopic craniosynostosis and to provide a platform for future research through deep phenotyping.

Two machine-learning algorithms were developed that quantify the severity of craniosynostosis – a supervised model specific to metopic craniosynostosis (Metopic Severity Score) and an unsupervised model used for cranial morphology in general (Cranial Morphology Deviation). CT imaging from multiple institutions were compiled to establish the spectrum of severity and a point-of-care tool was developed and validated.

Over the study period (2019-2021), 254 patients with metopic craniosynostosis and 92 control patients who underwent CT scan between the ages of 6 and 18 months were included. Scans were processed using an unsupervised machine-learning-based dysmorphology quantification tool, CranioRate™. The average Metopic Severity Score (MSS) for normal controls was 0.0±1.0 and for metopic synostosis was 4.9±2.3 (p<0.001). The average Cranial Morphology Deviation (CMD) for normal controls was 85.2±19.2 and for metopic synostosis was 189.9±43.4 (p<0.001). A point-of-care user interface has processed 46 CT images from 10 institutions.

The resulting quantification of severity using MSS and CMD has shown an improved capacity, relative to conventional measures, to automatically classify normal controls versus patients with metopic synostosis. We have mathematically described, in an objective and quantifiable manner, the distribution of phenotypes in metopic craniosynostosis.

M. Berzins. “Error Estimation for the Material Point and Particle in Cell Methods,” In ADMOS 2023, 2023.


The Material Point Method (MPM) is widely used for challenging applications in engineering and animation. The complexity of the method makes error estimation challenging. Error analysis of a simple MPM method is undertaken, and the global error is shown to be first order in space and time for a widely-used variant of the method. Computational experiments illustrate the estimated accuracy.

J. A. Bergquist, B. Zenger, L. Rupp, A. Busatto, J. D. Tate, D. H. Brooks, A. Narayan, R. MacLeod. “Uncertainty quantification of the effect of cardiac position variability in the inverse problem of electrocardiographic imaging,” In Journal of Physiological Measurement, IOP Science, 2023.
DOI: 10.1088/1361-6579/acfc32


Objective: Electrocardiographic imaging (ECGI) is a functional imaging modality that consists of two related problems: the forward problem of reconstructing body surface electrical signals given cardiac bioelectric activity, and the inverse problem of reconstructing cardiac bioelectric activity given measured body surface signals. ECGI relies on a model for how the heart generates bioelectric signals, which is subject to variability in its inputs. The study of how uncertainty in model inputs affects the model output is known as uncertainty quantification (UQ). This study establishes, develops, and characterizes the application of UQ to ECGI.

Approach: We establish two formulations for applying UQ to ECGI: a polynomial chaos expansion (PCE)-based parametric UQ formulation (PCE-UQ formulation), and a novel UQ-aware inverse formulation which leverages our previously established "joint-inverse" formulation (UQ joint-inverse formulation). We apply these to evaluate the effect of uncertainty in heart position on the ECGI solutions across a range of ECGI datasets.

Main Results: We demonstrated the ability of our UQ-ECGI formulations to characterize the effect of parameter uncertainty on the ECGI inverse problem. We found that while the PCE-UQ inverse solution provided richer outputs, such as sensitivities and standard deviations, the UQ joint-inverse solution provided a more interpretable output in the form of a single ECGI solution. Between these two methods we are able to assess a wide range of effects that heart position variability has on the ECGI solution.

Significance: This study, for the first time, characterizes in detail the application of UQ to the ECGI inverse problem. We demonstrated how UQ can provide insight into the behavior of ECGI using variability in cardiac position as a test case. This study lays the groundwork for future UQ-ECGI studies, as well as future development of ECGI formulations that are robust to input parameter variability.
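The PCE machinery behind the PCE-UQ formulation can be shown in miniature. The sketch below (pure Python, not the paper's pipeline) propagates a normally distributed input through a toy quadratic model using a degree-2 probabilists' Hermite chaos expansion with 3-point Gauss-Hermite quadrature; the model and parameter values are illustrative only.

```python
import math

# 3-point Gauss-Hermite quadrature (probabilists' convention),
# exact for polynomials up to degree 5 in a standard normal xi
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def hermite(k, x):
    """Probabilists' Hermite polynomials He_0, He_1, He_2."""
    return [1.0, x, x * x - 1.0][k]

def pce_moments(f, mu, sigma, degree=2):
    """Mean and variance of f(theta), theta ~ N(mu, sigma^2), from the
    chaos coefficients c_k = E[f(theta) He_k(xi)] / k!."""
    coeffs = []
    for k in range(degree + 1):
        num = sum(w * f(mu + sigma * x) * hermite(k, x)
                  for w, x in zip(WEIGHTS, NODES))
        coeffs.append(num / math.factorial(k))
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs[1:], 1))
    return mean, var

# toy model: quadratic response to the uncertain parameter
mean, var = pce_moments(lambda t: t * t, mu=1.0, sigma=0.5)
```

For f(θ) = θ² with θ ~ N(1, 0.5²), the exact moments are E = μ² + σ² = 1.25 and Var = 4μ²σ² + 2σ⁴ = 1.125, which the degree-2 expansion recovers exactly; the same coefficients also provide the sensitivity-type outputs mentioned in the abstract.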

T.C. Bidone, D.J. Odde. “Multiscale models of integrins and cellular adhesions,” In Current Opinion in Structural Biology, Vol. 80, Elsevier, 2023.


Computational models of integrin-based adhesion complexes have revealed important insights into the mechanisms by which cells establish connections with their external environment. However, how changes in conformation and function of individual adhesion proteins regulate the dynamics of whole adhesion complexes remains largely elusive. This is because of the large separation in time and length scales between the dynamics of individual adhesion proteins (nanoseconds and nanometers) and the emergent dynamics of the whole adhesion complex (seconds and micrometers), and the limitations of molecular simulation approaches in extracting accurate free energies, conformational transitions, reaction mechanisms, and kinetic rates, that can inform mechanisms at the larger scales. In this review, we discuss models of integrin-based adhesion complexes and highlight their main findings regarding: (i) the conformational transitions of integrins at the molecular and macromolecular scales and (ii) the molecular clutch mechanism at the mesoscale. Lastly, we present unanswered questions in the field of modeling adhesions and propose new ideas for future exciting modeling opportunities.

B. Borotikar, T.E.M. Mutsvangwa, S.Y. Elhabian, E. Audenaert. “Editorial: Statistical model-based computational biomechanics: applications in joints and internal organs,” In Frontiers in Bioengineering and Biotechnology, Vol. 11, 2023.
DOI: 10.3389/fbioe.2023.1232464

S. Brink, M. McKinsey, D. Boehme, C. Scully-Allison, I. Lumsden, D. Hawkins, T. Burgess, V. Lama, J. Luettgau, K.E. Isaacs, M. Taufer, O. Pearce. “Thicket: Seeing the Performance Experiment Forest for the Individual Run Trees,” In HPDC ’23, ACM, 2023.


Thicket is an open-source Python toolkit for Exploratory Data Analysis (EDA) of multi-run performance experiments. It enables an understanding of optimal performance configuration for large-scale application codes. Most performance tools focus on a single execution (e.g., single platform, single measurement tool, single scale). Thicket bridges the gap to convenient analysis in multi-dimensional, multi-scale, multi-architecture, and multi-tool performance datasets by providing an interface for interacting with the performance data.

Thicket has a modular structure composed of three components. The first component is a data structure for multi-dimensional performance data, which is composed automatically on the portable basis of call trees, and accommodates any subset of dimensions present in the dataset. The second is the metadata, enabling distinction and sub-selection of dimensions in performance data. The third is a dimensionality reduction mechanism, enabling analysis such as computing aggregated statistics on a given data dimension. Extensible mechanisms are available for applying analyses (e.g., top-down on Intel CPUs), data science techniques (e.g., K-means clustering from scikit-learn), modeling performance (e.g., Extra-P), and interactive visualization. We demonstrate the power and flexibility of Thicket through two case studies, first with the open-source RAJA Performance Suite on CPU and GPU clusters and another with a large physics simulation run on both a traditional HPC cluster and an AWS Parallel Cluster instance.
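The dimensionality-reduction component described above can be illustrated with a toy analogue: collapse the per-run dimension of a multi-run experiment by computing aggregate statistics per call-tree node. This is a pure-Python sketch of the idea, not Thicket's actual API or data model.

```python
from collections import defaultdict
from statistics import mean, stdev

def aggregate_runs(runs):
    """Collapse the 'run' dimension: for each call-tree node, compute
    aggregate statistics over the timings observed across runs.
    A toy analogue of the aggregation step described above."""
    by_node = defaultdict(list)
    for run in runs:                      # each run: {node name: time}
        for node, t in run.items():
            by_node[node].append(t)
    return {node: {"mean": mean(ts), "std": stdev(ts)}
            for node, ts in by_node.items()}

# three runs of the same code, e.g. at different scales or configurations
runs = [{"main": 10.0, "solve": 8.0},
        {"main": 12.0, "solve": 6.0},
        {"main": 11.0, "solve": 7.0}]
stats = aggregate_runs(runs)
```

In the real toolkit the aggregated frame stays linked to the metadata component, so a sub-selection (say, one architecture) can be aggregated separately.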

S. Campbell, M. C. Mendoza, A. Rammohan, M. E. McKenzie, T. C. Bidone. “Computational model of integrin adhesion elongation under an actin fiber,” In PLOS Computational Biology, Vol. 19, No. 7, Public Library of Science, pp. 1-19, 2023.
DOI: 10.1371/journal.pcbi.1011237


Cells create physical connections with the extracellular environment through adhesions. Nascent adhesions form at the leading edge of migrating cells and either undergo cycles of disassembly and reassembly, or elongate and stabilize at the end of actin fibers. How adhesions assemble has been addressed in several studies, but the exact role of actin fibers in the elongation and stabilization of nascent adhesions remains largely elusive. To address this question, here we extended our computational model of adhesion assembly by incorporating an actin fiber that locally promotes integrin activation. The model revealed that an actin fiber promotes adhesion stabilization and elongation. Actomyosin contractility from the fiber also promotes adhesion stabilization and elongation, by strengthening integrin-ligand interactions, but only up to a force threshold. Above this force threshold, most integrin-ligand bonds fail, and the adhesion disassembles. In the absence of contraction, actin fibers still support adhesion stabilization. Collectively, our results provide a picture in which myosin activity is dispensable for adhesion stabilization and elongation under an actin fiber, offering a framework for interpreting several previous experimental observations.

K.R. Carney, A.M. Khan, S. Stam, S.C. Samson, N. Mittal, S. Han, T.C. Bidone, M. Mendoza. “Nascent adhesions shorten the period of lamellipodium protrusion through the Brownian ratchet mechanism,” In Molecular Biology of the Cell, 2023.


Directional cell migration is driven by the conversion of oscillating edge motion into lasting periods of leading edge protrusion. Actin polymerization against the membrane and adhesions control edge motion, but the exact mechanisms that determine protrusion period remain elusive. We addressed this by developing a computational model in which polymerization of actin filaments against a deformable membrane and variable adhesion dynamics support edge motion. Consistent with previous reports, our model showed that actin polymerization and adhesion lifetime power protrusion velocity. However, increasing adhesion lifetime decreased the protrusion period. Measurements of adhesion lifetime and edge motion in migrating cells confirmed that adhesion lifetime is associated with and promotes protrusion velocity, but decreased duration. Our model showed that adhesions’ control of protrusion persistence originates from the Brownian ratchet mechanism for actin filament polymerization. With longer adhesion lifetime or increased adhesion density, the proportion of actin filaments tethered to the substrate increased, maintaining filaments against the cell membrane. The reduced filament-membrane distance generated pushing force for high edge velocity, but limited further polymerization needed for protrusion duration. We propose a mechanism for cell edge protrusion in which adhesion strength regulates actin filament polymerization to control the periods of leading edge protrusion.
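The Brownian ratchet load dependence underlying the model has a standard closed form: polymerization proceeds only when thermal fluctuations open a monomer-sized gap between filament tip and membrane, so the on-rate is attenuated by exp(-Fδ/kBT). A minimal sketch of that force-velocity relation follows; the rate constants are illustrative textbook-scale values, not the paper's fitted parameters.

```python
import math

def ratchet_velocity(force_pN, k_on=11.6, conc_uM=10.0, k_off=1.4,
                     delta_nm=2.7, kT_pN_nm=4.1):
    """Brownian ratchet force-velocity relation for a polymerizing filament:
    v = delta * (k_on * C * exp(-F * delta / kT) - k_off), in nm/s.
    Parameter values are illustrative, not fitted to the paper's data."""
    on_rate = k_on * conc_uM * math.exp(-force_pN * delta_nm / kT_pN_nm)
    return delta_nm * (on_rate - k_off)

v_free = ratchet_velocity(0.0)     # unloaded tip
v_loaded = ratchet_velocity(2.0)   # tip pushing against 2 pN of membrane load
```

The abstract's mechanism falls out of this expression: tethering more filaments near the membrane shrinks the tip-membrane gap, which is equivalent to raising the effective load F, giving higher instantaneous pushing force but less headroom for further polymerization.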

B. Charoenwong, R.M. Kirby, J. Reiter. “Computer Science Abstractions To Help Reason About Decentralized Stablecoin Design,” In IEEE Access, IEEE, 2023.


Computer science as a discipline is known for its penchant for using abstractions as a tool for reasoning. It is no surprise that computer science might have something valuable to lend to the world of decentralized stablecoin design, as it is in fact a “computing" problem. In this paper, we examine the possibility of a decentralized and capital-efficient stablecoin using smart contracts that algorithmically trade to maintain stability and study the potential new functionality that smart contracts enable. By exploiting traditional abstractions from computer science, we show that a capital-efficient algorithmic stablecoin cannot be provably stable. Additionally, we provide a formal exposition of the workings of Central Bank Digital Currencies, connecting this to the space of possible stablecoin designs. We then discuss several outstanding conjectures from both academics and practitioners and finally highlight the regulatory similarities between money-market funds and working stablecoins. Our work builds upon the current and growing interplay between the realms of engineering and financial services, and it also demonstrates how ways of thinking as a computer scientist can aid practitioners. We believe this research is vital for understanding and developing the future of financial technology.

H. Csala, A. Mohan, D. Livescu, A. Arzani. “Modeling Coupled 1D PDEs of Cardiovascular Flow with Spatial Neural ODEs,” In Machine Learning and the Physical Sciences Workshop, NeurIPS 2023, 2023.


Tackling coupled sets of partial differential equations (PDEs) through scientific machine learning presents a complex challenge, but it is essential for developing data-driven physics-based models. We employ a novel approach to model the coupled PDEs that govern the blood flow in stenosed arteries with deformable walls, while incorporating realistic inlet flow waveforms. We propose a low-dimensional model based on neural ordinary differential equations (ODEs) inspired by 1D blood flow equations. Our unique approach formulates the problem as ODEs in space rather than time, effectively overcoming issues related to time-dependent boundary conditions and PDE coupling. This innovative framework accurately captures flow rate and area variations, even when extrapolating to unseen waveforms. The promising results from this approach offer a different perspective on deploying neural ODEs to model coupled PDEs with unsteady boundary conditions, which are prevalent in many engineering applications.

H. Dai, M. Penwarden, R.M. Kirby, S. Joshi. “Neural Operator Learning for Ultrasound Tomography Inversion,” Subtitled “arXiv:2304.03297v1,” 2023.


Neural operator learning as a means of mapping between complex function spaces has garnered significant attention in the field of computational science and engineering (CS&E). In this paper, we apply Neural operator learning to the time-of-flight ultrasound computed tomography (USCT) problem. We learn the mapping between time-of-flight (TOF) data and the heterogeneous sound speed field using a full-wave solver to generate the training data. This novel application of operator learning circumnavigates the need to solve the computationally intensive iterative inverse problem. The operator learns the non-linear mapping offline and predicts the heterogeneous sound field with a single forward pass through the model. This is the first time operator learning has been used for ultrasound tomography and is the first step in potential real-time predictions of soft tissue distribution for tumor identification in beast imaging.

H. Dai, M. Bauer, P.T. Fletcher, S. Joshi. “Modeling the Shape of the Brain Connectome via Deep Neural Networks,” In Information Processing in Medical Imaging, Springer Nature Switzerland, pp. 291--302. 2023.
ISBN: 978-3-031-34048-2


The goal of diffusion-weighted magnetic resonance imaging (DWI) is to infer the structural connectivity of an individual subject's brain in vivo. To statistically study the variability and differences between normal and abnormal brain connectomes, a mathematical model of the neural connections is required. In this paper, we represent the brain connectome as a Riemannian manifold, which allows us to model neural connections as geodesics. This leads to the challenging problem of estimating a Riemannian metric that is compatible with the DWI data, i.e., a metric such that the geodesic curves represent individual fiber tracts of the connectomics. We reduce this problem to that of solving a highly nonlinear set of partial differential equations (PDEs) and study the applicability of convolutional encoder-decoder neural networks (CEDNNs) for solving this geometrically motivated PDE. Our method achieves excellent performance in the alignment of geodesics with white matter pathways and tackles a long-standing issue in previous geodesic tractography methods: the inability to recover crossing fibers with high fidelity. Code is available at

D. Dai, Y. Epshteyn, A. Narayan. “Energy Stable and Structure-Preserving Schemes for the Stochastic Galerkin Shallow Water Equations,” Subtitled “arXiv:2310.06229,” 2023.


The shallow water flow model is widely used to describe water flows in rivers, lakes, and coastal areas. Accounting for uncertainty in the corresponding transport-dominated non-linear PDE models presents theoretical and numerical challenges that motivate the central advances of this paper. Starting with a spatially one-dimensional hyperbolicity-preserving, positivity-preserving stochastic Galerkin formulation of the parametric/uncertain shallow water equations, we derive an entropy-entropy flux pair for the system. We exploit this entropy-entropy flux pair to construct structure-preserving, second-order energy-conservative, and first- and second-order energy-stable finite volume schemes for the stochastic Galerkin shallow water system. The performance of the methods is illustrated on several numerical experiments.
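For the deterministic 1D system, the entropy-entropy flux pair referenced above is the total energy; the paper derives the analogue for the stochastic Galerkin system. In standard notation (h depth, u velocity, m = hu momentum, g gravity), a sketch of the deterministic pair:

```latex
% 1D shallow water equations in conservative variables (h, m = hu):
%   h_t + m_x = 0,
%   m_t + \left( \tfrac{m^2}{h} + \tfrac{g h^2}{2} \right)_x = 0.
% Entropy (total energy) and entropy flux:
\eta(h, m) = \frac{m^2}{2h} + \frac{g h^2}{2},
\qquad
q(h, m) = \frac{m^3}{2h^2} + g h\, m,
% so that smooth solutions satisfy \eta_t + q_x = 0, while entropy-stable
% schemes enforce \eta_t + q_x \le 0 discretely.
```

An energy-conservative scheme preserves the discrete analogue of the equality, and the energy-stable variants enforce the inequality, which is the structure-preservation property the abstract refers to.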

H. Dai, V. Sarkar, C. Dial, M.D. Foote, Y. Hitchcock, S. Joshi, B. Salter. “High-Fidelity CT on Rails-Based Characterization of Delivered Dose Variation in Conformal Head and Neck Treatments,” In Applied Radiation Oncology, 2023.
DOI: 10.1101/2023.04.07.23288305


Objective: This study aims to characterize dose variations from the original plan for a cohort of patients with head-and-neck cancer (HNC) using high-quality CT on rails (CTOR) datasets and evaluate a predictive model for identifying patients needing replanning.

Materials and Methods: In total, 74 patients with HNC treated on our CTOR-equipped machine were evaluated in this retrospective study. Patients were treated at our facility using in-room, CTOR image guidance—acquiring CTOR kV fan beam CT images on a weekly to near-daily basis. For each patient, a particular day’s delivered treatment dose was calculated by applying the approved, planned beam set to the post image-guided alignment CT image of the day. Total accumulated delivered dose distributions were calculated and compared with the planned dose distribution, and differences were characterized by comparison of dose and biological response statistics.

Results: The majority of patients in the study saw excellent agreement between planned and delivered dose distribution in targets—the mean deviations of dose received by 95% and 98% of the planning target volumes of the cohort are −0.7% and −1.3%, respectively. In critical organs, we saw a +6.5% mean deviation of mean dose in the parotid glands, −2.3% mean deviation of maximum dose in the brainstem, and +0.7% mean deviation of maximum dose in the spinal cord. Of 74 patients, 10 experienced nontrivial variation of delivered parotid dose, which resulted in a normal tissue complication probability (NTCP) increase compared with the anticipated NTCP in the original plan, ranging from 11% to 44%.

Conclusion: We determined that a midcourse evaluation of dose deviation was not effective in predicting the need for replanning for our patient cohorts. The observed nontrivial dose difference to parotid gland delivered dose suggests that even when rigorous, high-quality image guidance is performed, clinically concerning variations to predicted dose delivery can still occur.

Y. Ding, J. Wilburn, H. Shrestha, A. Ndlovu, K. Gadhave, C. Nobre, A. Lex, L. Harrison. “reVISit: Supporting Scalable Evaluation of Interactive Visualizations,” Subtitled “OSF Preprints,” 2023.


reVISit is an open-source software toolkit and framework for creating, deploying, and monitoring empirical visualization studies. Running a quality empirical study in visualization can be demanding and resource-intensive, requiring substantial time, cost, and technical expertise from the research team. These challenges are amplified as research norms trend towards more complex and rigorous study methodologies, alongside a growing need to evaluate more complex interactive visualizations. reVISit aims to ameliorate these challenges by introducing a domain-specific language for study set-up, and a series of software components, such as UI elements, behavior provenance, and an experiment monitoring and management interface. Together with interactive or static stimuli provided by the experimenter, these are compiled to a ready-to-deploy web-based experiment. We demonstrate reVISit's functionality by re-implementing two studies – a graphical perception task and a more complex, interactive study. reVISit is an open-source community project, available at

S. Dubey, T. Kataria, B. Knudsen, S.Y. Elhabian. “Structural Cycle GAN for Virtual Immunohistochemistry Staining of Gland Markers in the Colon,” Subtitled “arXiv:2308.13182,” 2023.


With the advent of digital scanners and deep learning, diagnostic operations may move from a microscope to a desktop. Hematoxylin and Eosin (H&E) staining is one of the most frequently used stains for disease analysis, diagnosis, and grading, but pathologists do need different immunohistochemical (IHC) stains to analyze specific structures or cells. Obtaining all of these stains (H&E and different IHCs) on a single specimen is a tedious and time-consuming task. Consequently, virtual staining has emerged as an essential research direction. Here, we propose a novel generative model, Structural Cycle-GAN (SC-GAN), for synthesizing IHC stains from H&E images, and vice versa. Our method expressly incorporates structural information in the form of edges (in addition to color data) and employs attention modules exclusively in the decoder of the proposed generator model. This integration enhances feature localization and preserves contextual information during the generation process. In addition, a structural loss is incorporated to ensure accurate structure alignment between the generated and input markers. To demonstrate the efficacy of the proposed model, experiments are conducted with two IHC markers emphasizing distinct structures of glands in the colon: the nucleus of epithelial cells (CDX2) and the cytoplasm (CK818). Quantitative metrics such as FID and SSIM are frequently used for the analysis of generative models, but they do not correlate explicitly with higher-quality virtual staining results. Therefore, we propose two new quantitative metrics that correlate directly with the virtual staining specificity of IHC markers.

M. Falk, V. Tobiasson, A. Bock, C. Hansen, A. Ynnerman. “A Visual Environment for Data Driven Protein Modeling and Validation,” In IEEE Transactions on Visualization and Computer Graphics, IEEE, pp. 1-11. 2023.
DOI: 10.1109/TVCG.2023.3286582


In structural biology, validation and verification of new atomic models are crucial and necessary steps which limit the production of reliable molecular models for publications and databases. An atomic model is the result of meticulous modeling and matching and is evaluated using a variety of metrics that provide clues to improve and refine the model so it fits our understanding of molecules and physical constraints. In cryo-electron microscopy (cryo-EM), validation is also part of an iterative modeling process in which there is a need to judge the quality of the model during the creation phase. A shortcoming is that the process and results of the validation are rarely communicated using visual metaphors. This work presents a visual framework for molecular validation. The framework was developed in close collaboration with domain experts in a participatory design process. Its core is a novel visual representation based on 2D heatmaps that shows all available validation metrics in a linear fashion, presents a global overview of the atomic model, and provides domain experts with interactive analysis tools. Additional information stemming from the underlying data, such as a variety of local quality measures, is used to guide the user's attention toward regions of higher relevance. Linked with the heatmap is a three-dimensional molecular visualization providing the spatial context of the structures and chosen metrics. Additional views of statistical properties of the structure are included in the visual framework. We demonstrate the utility of the framework and its visual guidance with examples from cryo-EM.

S. Fang, S. Zhe, H.M. Lin, A.A. Azad, H. Fettke, E.M. Kwan, L. Horvath, B. Mak, T. Zheng, P. Du, S. Jia, R.M. Kirby, M. Kohli. “Multi-Omic Integration of Blood-Based Tumor-Associated Genomic and Lipidomic Profiles Using Machine Learning Models in Metastatic Prostate Cancer,” In Clinical Cancer Informatics, 2023.


To determine prognostic and predictive clinical outcomes in metastatic hormone-sensitive prostate cancer (mHSPC) and metastatic castrate-resistant prostate cancer (mCRPC) on the basis of a combination of plasma-derived genomic alterations and lipid features in a longitudinal cohort of patients with advanced prostate cancer.

A multifeature classifier was constructed to predict clinical outcomes using plasma-based genomic alterations detected in 120 genes and 772 lipidomic species as informative features in a cohort of 71 patients with mHSPC and 144 patients with mCRPC. Outcomes of interest were collected over 11 years of follow-up. These included in mHSPC state early failure of androgen-deprivation therapy (ADT) and exceptional responders to ADT; early death (poor prognosis) and long-term survivors in mCRPC state. The approach was to build binary classification models that identified discriminative candidates with optimal weights to predict outcomes. To achieve this, we built multi-omic feature-based classifiers using traditional machine learning (ML) methods, including logistic regression with sparse regularization, multi-kernel Gaussian process regression, and support vector machines.

The levels of specific ceramides (d18:1/14:0 and d18:1/17:0), and the presence of CHEK2 mutations, AR amplification, and RB1 deletion were identified as the most crucial factors associated with clinical outcomes. Using ML models, the optimal multi-omic feature combination resulted in AUC scores of 0.751 for predicting mHSPC survival and 0.638 for predicting ADT failure; and, in the mCRPC state, 0.687 for prognostication and 0.727 for exceptional survival. These models were observed to be superior to models built on a limited number of candidate features for developing multi-omic prognostic and predictive signatures.

Using a ML approach that incorporates multiple omic features improves the prediction accuracy for metastatic prostate cancer outcomes significantly. Validation of these models will be needed in independent data sets in future.
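One of the classifier families named above, logistic regression with sparse regularization, can be sketched in a few lines. The toy below uses proximal gradient descent with soft-thresholding for the L1 penalty on synthetic data with one informative and one noise feature; it is not the paper's pipeline, and the feature columns are hypothetical stand-ins for genomic/lipidomic inputs.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_l1_logistic(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-regularised logistic regression via proximal gradient descent:
    a gradient step on the logistic loss, then soft-thresholding by
    lr * lam (the proximal operator of the L1 penalty)."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        for j in range(d):
            w[j] -= lr * grad[j]
            w[j] = math.copysign(max(abs(w[j]) - lr * lam, 0.0), w[j])
    return w

random.seed(0)
# toy data: column 0 drives the outcome, column 1 is pure noise
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if xi[0] > 0 else 0 for xi in X]
w = fit_l1_logistic(X, y)
```

The L1 penalty plays the role of the candidate-selection step in the abstract: the weight on the noise feature is shrunk toward zero while the informative feature keeps a large coefficient.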

S. Fang, X. Yu, S. Li, Z. Wang, R. Kirby, S. Zhe. “Streaming Factor Trajectory Learning for Temporal Tensor Decomposition,” 2023.


Practical tensor data often comes with time information. Most existing temporal decomposition approaches estimate a set of fixed factors for the objects in each tensor mode, and hence cannot capture the temporal evolution of the objects' representation. More importantly, we lack an effective approach to capture such evolution from streaming data, which is common in real-world applications. To address these issues, we propose Streaming Factor Trajectory Learning (SFTL) for temporal tensor decomposition. We use Gaussian processes (GPs) to model the trajectory of factors so as to flexibly estimate their temporal evolution. To address the computational challenges in handling streaming data, we convert the GPs into a state-space prior by constructing an equivalent stochastic differential equation (SDE). We develop an efficient online filtering algorithm to estimate a decoupled running posterior of the involved factor states upon receiving new data. The decoupled estimation enables us to conduct standard Rauch-Tung-Striebel smoothing to compute the full posterior of all the trajectories in parallel, without the need for revisiting any previous data. We have shown the advantage of SFTL in both synthetic tasks and real-world applications.
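The GP-to-SDE conversion that makes streaming inference tractable is easiest to see in one dimension: a Matérn-1/2 (Ornstein-Uhlenbeck) prior is exactly a linear SDE, so each incoming observation is absorbed in O(1) by Kalman filtering. A minimal single-trajectory sketch follows, with illustrative hyperparameters rather than the paper's model (which filters factor states of a tensor decomposition, not raw observations).

```python
import math

def ou_kalman_filter(ts, ys, ell=1.0, s2=1.0, noise_var=0.1):
    """Streaming GP regression with the Matern-1/2 (OU) kernel
    k(t,t') = s2 * exp(-|t - t'| / ell), using its state-space form
    dx = -(x/ell) dt + sqrt(2*s2/ell) dW, observed with Gaussian noise."""
    m, P = 0.0, s2                      # stationary prior on the state
    t_prev = ts[0]
    posterior = []
    for t, y in zip(ts, ys):
        a = math.exp(-(t - t_prev) / ell)              # exact transition
        m, P = a * m, a * a * P + s2 * (1.0 - a * a)   # predict
        K = P / (P + noise_var)                        # Kalman gain
        m, P = m + K * (y - m), (1.0 - K) * P          # update
        posterior.append((m, P))
        t_prev = t
    return posterior

ts = [0.1 * i for i in range(50)]
ys = [math.sin(t) for t in ts]          # a slowly varying trajectory
posterior = ou_kalman_filter(ts, ys)
```

Running an RTS smoother backward over the stored filter states would then give the full trajectory posterior without revisiting the raw data, mirroring the decoupled smoothing step in the abstract.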

S. Fang, M. Cooley, D. Long, S. Li, R. Kirby, S. Zhe. “Solving High Frequency and Multi-Scale PDEs with Gaussian Processes,” Subtitled “arXiv:2311.04465,” 2023.


Machine learning based solvers have garnered much attention in physical simulation and scientific computing, with a prominent example, physics-informed neural networks (PINNs). However, PINNs often struggle to solve high-frequency and multi-scale PDEs, which can be due to the spectral bias during neural network training. To address this problem, we resort to the Gaussian process (GP) framework. To flexibly capture the dominant frequencies, we model the power spectrum of the PDE solution with a Student-t mixture or a Gaussian mixture. We then apply the inverse Fourier transform to obtain the covariance function (according to the Wiener-Khinchin theorem). The covariance derived from the Gaussian mixture spectrum corresponds to the known spectral mixture kernel. We are the first to discover its rationale and effectiveness for PDE solving. Next, we estimate the mixture weights in the log domain, which we show is equivalent to placing a Jeffreys prior. It automatically induces sparsity, prunes excessive frequencies, and adjusts the remaining frequencies toward the ground truth. Third, to enable efficient and scalable computation on massive collocation points, which are critical to capture high frequencies, we place the collocation points on a grid, and multiply our covariance function at each input dimension. We use the GP conditional mean to predict the solution and its derivatives so as to fit the boundary condition and the equation itself. As a result, we can derive a Kronecker product structure in the covariance matrix. We use Kronecker product properties and multilinear algebra to greatly promote computational efficiency and scalability, without any low-rank approximations. We show the advantage of our method in systematic experiments.
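The inverse-Fourier step for the Gaussian-mixture spectrum has a closed form: a symmetrised Gaussian mixture over frequencies transforms to the spectral mixture kernel. A minimal sketch with made-up mixture parameters (not values from the paper):

```python
import math

def spectral_mixture_kernel(tau, weights, mus, sigmas):
    """Stationary covariance obtained via the Wiener-Khinchin theorem from
    a symmetrised Gaussian mixture over frequencies:
    k(tau) = sum_q w_q * exp(-2 pi^2 sigma_q^2 tau^2) * cos(2 pi mu_q tau)."""
    return sum(w * math.exp(-2.0 * math.pi ** 2 * s * s * tau * tau)
               * math.cos(2.0 * math.pi * mu * tau)
               for w, mu, s in zip(weights, mus, sigmas))

# two dominant frequencies (illustrative values), weights summing to 1
W, MU, SG = [0.6, 0.4], [1.0, 5.0], [0.1, 0.2]
k0 = spectral_mixture_kernel(0.0, W, MU, SG)
```

Each mixture component contributes a cosine at its mean frequency damped by a Gaussian envelope, so pruning a component (via the sparsity-inducing Jeffreys prior described above) removes one frequency band from the covariance.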