Technical Thrust Area
Uncertainty Quantification and Probabilistic Modeling

Committee
Chair: Abani Patra, Tufts University
Vice-Chair: Serge Prudhomme, Polytechnique Montréal
Members-at-Large:
Johann Guilleminot, Duke University
Jian-Xun Wang, University of Notre Dame

Members


Upcoming Events

USACM Thematic Conference

Uncertainty Quantification for Machine Learning Integrated Physics Modeling (MLIP), August 18-19, 2022, Arlington, Virginia

Monthly Webinar

Webinars will resume in Fall 2022.

Past Webinars

May 12, 2022


Speaker: Jian-xun Wang, University of Notre Dame; Discussant: Michael Brenner, Harvard University


Title: Leveraging Physics-Induced Bias in Scientific Machine Learning for Computational Mechanics – Physics-Informed, Structure-Preserved Learning for Problems with Complex Geometries


First-principles modeling and simulation of complex systems based on partial differential equations (PDEs) and numerical discretization have been developed for decades and have achieved great success. Nonetheless, traditional numerical solvers face significant challenges in many practical scenarios, e.g., inverse problems, uncertainty quantification, design, and optimization. Moreover, for complex systems the governing equations might not be fully known due to an incomplete understanding of the underlying physics, in which case a first-principles numerical solver cannot be built. Recent advances in data science and machine learning, combined with the ever-increasing availability of high-fidelity simulation and measurement data, open up new opportunities for developing data-enabled computational mechanics models. Although state-of-the-art machine/deep learning techniques hold great promise, many challenges remain, e.g., the requirement of “big data”, limited generalizability/extrapolability, and a lack of interpretability/explainability. On the other hand, there is often a wealth of prior knowledge about these systems, including physical laws and phenomenological principles, that can be leveraged in this regard. Thus, there is an urgent need for fundamentally new and transformative machine learning techniques, closely grounded in physics, to address the aforementioned challenges in computational mechanics problems.


This talk will briefly discuss our recent developments in scientific machine learning for computational mechanics, focusing on several ways of baking physics-induced bias into machine/deep learning models for data-enabled predictive modeling. Specifically, the following topics will be covered: (1) PDE-structure-preserved deep learning, where the neural network architectures are built by preserving mathematical structures of the (partially) known governing physics for predicting spatiotemporal dynamics; and (2) physics-informed geometric deep learning for predictive modeling involving complex geometries and irregular domains.
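To give a flavor of the general idea of physics-induced bias (a generic illustration, not the specific architectures from the talk): a physics-informed loss augments a sparse data-fitting term with the residual of the governing equation at collocation points. A minimal sketch for the toy ODE u' + u = 0, u(0) = 1, with a hypothetical untrained two-layer network and finite-difference derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network u_theta: R -> R (weights not yet trained)
W1 = rng.normal(0.0, 1.0, (16, 1)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (1, 16)); b2 = np.zeros(1)

def u(t):
    """Evaluate the network at a 1-D array of points t."""
    h = np.tanh(W1 @ t[None, :] + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

# Physics residual of u' + u = 0 at collocation points (central differences)
t = np.linspace(0.0, 1.0, 50)
eps = 1e-4
du = (u(t + eps) - u(t - eps)) / (2.0 * eps)
physics_loss = np.mean((du + u(t)) ** 2)

# Data term: the single "measurement" u(0) = 1
data_loss = (u(np.array([0.0]))[0] - 1.0) ** 2

# Minimizing this composite loss biases the network toward the governing ODE
loss = data_loss + physics_loss
```

Training would minimize `loss` over the weights; the physics term acts as a regularizer that compensates for the scarcity of data.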


Bio: Dr. Jian-xun Wang is an assistant professor of Aerospace and Mechanical Engineering at the University of Notre Dame. He received a Ph.D. in Aerospace Engineering from Virginia Tech in 2017 and was a postdoctoral scholar at UC Berkeley before joining Notre Dame in 2018. He is a recipient of the 2021 NSF CAREER Award. His research focuses on scientific machine learning, data-enabled computational modeling, Bayesian data assimilation, and uncertainty quantification.


May 5, 2022


Speaker: Ioannis Kougioumtzoglou, Columbia University; Discussant: George Deodatis, Columbia University


Title: Path Integrals in Stochastic Engineering Dynamics


Ever-increasing computational capabilities, the development of potent signal processing tools, and advanced experimental setups have contributed to highly sophisticated modeling of engineering systems and related excitations. As a result, the form of the governing equations has become highly complex from a mathematical perspective. Examples include high dimensionality, complex nonlinearities, joint time-frequency representations, and generalized/fractional calculus modeling. In many cases even the deterministic solution of such equations is an open issue and an active research topic. Clearly, solving the stochastic counterparts of these equations is orders of magnitude more challenging. To address this issue, the speaker and co-workers have recently developed a solution framework, based on the concept of the Wiener path integral, for stochastic response analysis and reliability assessment of diverse dynamical systems of engineering interest. Significant novelties and advantages that will be highlighted in this talk include:
i) The methodology can readily account for complex nonlinear/hysteretic behaviors, for fractional calculus modeling, as well as for non-white and non-Gaussian stochastic process representations.
ii) High-dimensional systems can be readily addressed by relying on a variational formulation with mixed fixed/free boundary conditions, which renders the computational cost independent of the total number of degrees of freedom (DOFs) or stochastic dimensions; thus, the ‘curse of dimensionality’ in stochastic dynamics is circumvented.
iii) The computational cost can be further drastically reduced by employing sparse representations for the system response probability density function (PDF) in conjunction with compressive sampling schemes and group sparsity concepts. Moreover, the methodology can quantify the uncertainty associated with the system response PDF estimate by relying on a Bayesian formulation.
Various examples are presented and discussed pertaining to a wide range of engineering systems, including, indicatively, a class of nonlinear electromechanical energy harvesters and a 100-DOF stochastically excited nonlinear dynamical system modeling the behavior of large arrays of coupled nano-mechanical oscillators.
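For orientation, the Wiener path integral representation referred to above expresses the transition PDF of the response as a functional integral over paths (schematic form; the notation, with a stochastic-action Lagrangian L, is assumed here for illustration):

```latex
p\big(x_f, t_f \mid x_i, t_i\big)
  = \int_{x(t_i)=x_i}^{x(t_f)=x_f}
    \exp\!\left( -\int_{t_i}^{t_f} \mathcal{L}\big(x(t), \dot{x}(t)\big)\,\mathrm{d}t \right)
    \mathcal{D}[x(t)]
```

The dominant contribution comes from the most probable path, which extremizes the action functional; the variational formulation with mixed fixed/free boundary conditions mentioned in item ii) generalizes this extremization.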


Bio: Prof. Ioannis A. Kougioumtzoglou received his five-year Diploma in Civil Engineering from the National Technical University of Athens (NTUA) in Greece (2007), and his M.Sc. (2009) and Ph.D. (2011) degrees in Civil Engineering from Rice University, TX, USA. He joined Columbia University in 2014, where he is currently an Associate Professor in the Department of Civil Engineering & Engineering Mechanics. He is the author of approximately 150 publications, including more than 80 technical papers in archival International Journals. Prof. Kougioumtzoglou was chosen in 2018 by the National Science Foundation (NSF) to receive the CAREER Award, which recognizes early-stage scholars with high levels of promise and excellence. He is also the 2014 European Association of Structural Dynamics (EASD) Junior Research Prize recipient “for his innovative influence on the field of nonlinear stochastic dynamics”. Prof. Kougioumtzoglou is an Associate Editor for the ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems and an Editorial Board Member of the following Journals: Mechanical Systems and Signal Processing, Probabilistic Engineering Mechanics, and International Journal of Non-Linear Mechanics. He is also a co-Editor of the Encyclopedia of Earthquake Engineering (Springer) and has served as a Guest Editor for several Special Issues in International Journals. Prof. Kougioumtzoglou has co-chaired the ASCE Engineering Mechanics Institute Conference 2021 and Probabilistic Mechanics & Reliability Conference 2021 (EMI 2021 / PMC 2021) and has served in the scientific and/or organizing committees of many international technical conferences. Prof. Kougioumtzoglou is a member both of the American Society of Civil Engineers (M.ASCE) and the American Society of Mechanical Engineers (M.ASME), while he currently serves as a member of the ASCE EMI committees on Dynamics and on Probabilistic Methods. 
He is a Licensed Professional Civil Engineer in Greece, and a Fellow of the Higher Education Academy (FHEA) in the UK.

March 16, 2022

Speaker: Danial Faghihi, University at Buffalo; Discussant: J. Tinsley Oden, The University of Texas at Austin

Title: Toward Selecting Optimal Predictive Computational Models

Of overriding importance in the scientific prediction of complex physical systems is validating mechanistic models in the presence of uncertainties. In addition to data uncertainty and numerical error, uncertainty in selecting the optimal model formulation poses a significant challenge to predictive computational modeling. In a Bayesian setting, the choice of models for computational prediction relies on the available observational data and prior beliefs about the model and its parameters. This talk discusses a systematic framework for selecting an “optimal” predictive model, among the numerous possible models with different fidelities and complexities, that delivers sufficiently accurate computational predictions. In particular, we extend an adaptive computational framework, known as the Occam-Plausibility ALgorithm (OPAL), that leverages Bayesian inference and the notion of model plausibility to select the simplest valid model. The key feature of our modification is the design of model-specific validation experiments to provide observational data reflecting, in some sense, the structure of the target prediction. An application of this framework to selecting an optimal discrete-to-continuum multiscale model for predicting the performance of microscale materials systems will be presented. We will also provide an example of leveraging validated and selected models for predicting heterogeneous tumor morphology in specific subjects via a scalable solution algorithm for high-dimensional Bayesian inference. Finally, we will discuss challenges and possible future directions in developing strategies for selecting optimal neural networks in the context of hybrid physics/machine-learning multiscale models of mesoporous thermal insulation materials.
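The model-plausibility idea can be illustrated with a toy Bayesian comparison (a generic sketch, not the OPAL implementation): two candidate models of the same data are ranked by their evidences, estimated here by brute-force prior sampling, and the simpler model that still fits tends to win.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from the "true" process y = 2x + noise
x = np.linspace(-1.0, 1.0, 10)
sigma = 0.3  # known noise std in this toy
y = 2.0 * x + rng.normal(0.0, sigma, x.size)

def log_like(y_model):
    """Gaussian log-likelihood (up to a constant shared by both models)."""
    return -0.5 * np.sum(((y - y_model) / sigma) ** 2, axis=-1)

n = 50_000
a = rng.normal(0.0, 2.0, n)   # prior samples, shared slope parameter
b = rng.normal(0.0, 2.0, n)   # extra parameter of the richer model

# Evidence p(D|M) ~ average likelihood over the prior (crude Monte Carlo)
ll1 = log_like(a[:, None] * x)                       # M1: y = a x
ll2 = log_like(a[:, None] * x + b[:, None] * x**2)   # M2: y = a x + b x^2
ev1, ev2 = np.exp(ll1).mean(), np.exp(ll2).mean()

# Normalized plausibilities, assuming equal prior model probabilities
rho1, rho2 = ev1 / (ev1 + ev2), ev2 / (ev1 + ev2)
```

Because M2 spreads prior mass over an unneeded parameter, its evidence carries an Occam penalty and `rho1 > rho2`: the simplest model consistent with the data is the most plausible.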

February 23, 2022

Speaker: Johann Guilleminot, Duke University; Discussant: Roger Ghanem, University of Southern California

Title: Stochastic Modeling for Physics-Consistent Uncertainty Quantification on Constrained Spaces, with Various Applications in Computational Mechanics

In this talk, we discuss the construction of admissible, physics-consistent and identifiable stochastic models for uncertainty quantification. We consider the case of variables taking values in constrained spaces (with boundaries defined by manifolds) and indexed by complex geometries described by nonconvex sets. This setting is relevant to a broad variety of applications in computational mechanics, ranging from mechanical simulations on parts produced by additive manufacturing to multiscale analyses with stochastic connected phases. We first present theoretical and computational procedures to ensure well-posedness and to generate random field representations defined by arbitrary marginal transport maps. The sampling scheme relies, in particular, on the construction of a family of stochastic differential equations driven by an ad hoc space-time process and involves an adaptive step sequence that ensures stability near the boundaries of the state space. Finally, we provide results pertaining to modeling, sampling, and statistical inverse identification for various applications including additive manufacturing, phase-field fracture modeling, multiscale analyses on nonlinear microstructures, and patient-specific computations on soft biological tissues.

December 8, 2021

Speaker: Audrey Olivier, University of Southern California; Discussant: Lori Graham-Brady, Johns Hopkins University

Title: Bayesian Learning of Neural Networks for Small or Imbalanced Data Sets

Data-based predictive models such as neural networks are showing great potential for use in various scientific and engineering fields. They can be used in conjunction with physics-based models to account for missing or hard-to-model physics, or as surrogates to replace high-fidelity, overly costly physics-based simulations. However, in many engineering fields data is expensive to obtain, and data scarcity and/or data imbalance is a challenge. Many physical processes are also random in nature and exhibit large aleatory uncertainties. Bayesian methods allow for a comprehensive account of both aleatory and epistemic uncertainties; however, they are challenging to use for over-parameterized models such as neural networks. This talk will present methods based on variational inference and model averaging for probabilistic training of neural networks. An application to surrogate materials modeling will be presented, where data is scarce because it is obtained from expensive high-fidelity materials simulations. Finally, we will show how this probabilistic approach makes it possible to integrate scientific intuition by defining a meaningful prior and likelihood for training. The example presented pertains to the prediction of ambulance travel times, using real data provided by the New York City Fire Department.
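The aleatory/epistemic split that such probabilistic training targets can be sketched with a toy model-averaging example (bootstrap refits of a simple stand-in model, not the variational method from the talk; the noise level is assumed known here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Scarce, noisy data from y = 2x + noise (aleatory std 0.3, assumed known)
x = rng.uniform(-1.0, 1.0, 10)
y = 2.0 * x + rng.normal(0.0, 0.3, 10)
aleatory_var = 0.3 ** 2

# "Model averaging" over bootstrap refits of a linear stand-in model
x_query = 0.5
preds = []
for _ in range(200):
    idx = rng.integers(0, x.size, x.size)          # resample the small data set
    A = np.column_stack([x[idx], np.ones(x.size)])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    preds.append(coef[0] * x_query + coef[1])
preds = np.asarray(preds)

epistemic_var = preds.var()               # disagreement between refits
total_var = aleatory_var + epistemic_var  # predictive variance at x_query
```

With only 10 training points the epistemic term is non-negligible; it would shrink as data accumulates, while the aleatory term would not.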

November 4, 2021

Speaker: Catherine Gorlé; Discussant: Gianluca Iaccarino

Title: Uncertainty Quantification and Data Assimilation for Predictive Computational Wind Engineering

Computational fluid dynamics (CFD) can inform sustainable design of buildings and cities in terms of optimizing pedestrian wind comfort, air quality, thermal comfort, energy efficiency, and resiliency to extreme wind events. An important limitation is that the accuracy of CFD results can be compromised by the large natural variability and complex physics that are characteristic of urban flow problems. In this talk I will show how uncertainty quantification and data assimilation can be leveraged to evaluate and improve the predictive capabilities of Reynolds-averaged Navier-Stokes simulations for urban flow and dispersion. I will focus on quantifying inflow and turbulence model form uncertainties for two different urban environments: Oklahoma City and Stanford’s campus. For both test cases, the predictive capabilities of the models will be evaluated by comparing the model results to field measurements.

October 7, 2021

Speaker: Yeonjong Shin; Discussant: Dongbin Xiu

Title: Mathematical approaches for robustness and reliability in scientific machine learning

Machine learning (ML) has achieved unprecedented empirical success in diverse applications. It has now been applied to solving scientific problems, giving rise to a new sub-field known as Scientific Machine Learning (SciML). Many ML techniques, however, are very sophisticated, requiring trial-and-error and numerous tricks. The result is a lack of robustness and reliability, which are critical factors for scientific applications.
This talk centers on mathematical approaches for SciML that provide robustness and reliability. The first part will focus on the data-driven discovery of dynamical systems. I will present a general framework for designing neural networks (NNs) for the GENERIC formalism, resulting in GENERIC formalism informed NNs (GFINNs). The framework provides flexible ways of incorporating available physics information into NNs, and a universal approximation theorem for GFINNs is established. The second part will cover Active Neuron Least Squares (ANLS), an efficient training algorithm for NNs. ANLS is designed from insights gained from the analysis of gradient descent training of NNs, in particular the analysis of the plateau phenomenon. The performance of ANLS will be demonstrated and compared with existing popular methods in various learning tasks ranging from function approximation to solving PDEs.
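For reference, the GENERIC formalism that GFINNs are built around writes the evolution of a state z in terms of an energy E and an entropy S (standard form; the degeneracy conditions are the structural constraints a network architecture must enforce):

```latex
\frac{\mathrm{d}z}{\mathrm{d}t} = L(z)\,\nabla E(z) + M(z)\,\nabla S(z),
\qquad L\,\nabla S = 0, \qquad M\,\nabla E = 0
```

Here L is skew-symmetric (reversible dynamics) and M is symmetric positive semi-definite (irreversible dynamics); together with the degeneracy conditions, these properties guarantee energy conservation and non-negative entropy production along trajectories.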

June 25, 2021

Speaker: Jiaxin Zhang; Discussant: Richard Archibald

Title: Uncertainty-aware inverse learning using generative flows

Solving inverse problems is a longstanding challenge in mathematics and the natural sciences, where the goal is to determine hidden parameters from a set of specific observations. Typically, the forward problem going from parameter space to observation space is well established, but the inverse process is often ill-posed and ambiguous, with multiple parameter sets resulting in the same measurement. Recently, deep invertible architectures have been proposed to solve the inverse problem, but they currently struggle to localize the exact solutions precisely and to fully explore the parameter space without missing solutions. In this talk, we will present a novel approach leveraging recent advances in normalizing flows and deep invertible neural network architectures for efficiently and accurately solving inverse problems. Given a specific observation and latent-space sampling, the learned invertible model provides a posterior over the parameter space; we use these posterior samples as an implicit prior initialization that enables us to narrow down the search space. We then use gradient descent with backpropagation to calibrate the inverse solutions within a local region. Meanwhile, an exploratory sampling strategy is imposed on the latent space to better explore and capture all possible solutions. We evaluate our approach on analytical benchmark tasks, crystal design in quantum chemistry, and image reconstruction in medicine and astrophysics, and find that it achieves superior performance compared to several state-of-the-art baseline methods.
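The exact invertibility that such normalizing-flow architectures rely on can be seen in a single RealNVP-style affine coupling layer (a minimal sketch; the scalar "scale" and "shift" functions `w_s`, `w_t` are hypothetical stand-ins for small neural networks):

```python
import numpy as np

def forward(z, w_s=0.5, w_t=0.3):
    """Affine coupling: transform z2 conditioned on the untouched part z1."""
    z1, z2 = z[:1], z[1:]
    s, t = np.tanh(w_s * z1), w_t * z1   # stand-ins for scale/shift networks
    return np.concatenate([z1, z2 * np.exp(s) + t])

def inverse(x, w_s=0.5, w_t=0.3):
    """Exact inverse: x1 passed through unchanged, so s and t are recomputable."""
    x1, x2 = x[:1], x[1:]
    s, t = np.tanh(w_s * x1), w_t * x1
    return np.concatenate([x1, (x2 - t) * np.exp(-s)])

z = np.array([0.7, -1.2, 2.1])
x = forward(z)
z_back = inverse(x)   # round-trip recovers z to floating-point precision
```

Because the conditioning part passes through unchanged, the inverse never requires inverting a network, and the Jacobian log-determinant (the sum of `s`) is cheap, which is what makes these models usable as learned posteriors.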

May 17, 2021

Speaker: Tan Bui-Thanh; Discussant: Omar Ghattas

Title: Model-aware deep learning approaches for forward and PDE-constrained inverse problems

The fast growth in practical applications of machine learning across a range of contexts has fueled a renewed interest in machine learning methods in recent years. Scientific machine learning is an emerging discipline that merges scientific computing and machine learning. Whilst scientific computing focuses on large-scale models derived from scientific laws describing physical phenomena, machine learning focuses on developing data-driven models that require minimal knowledge and few prior assumptions. These contrasting approaches bring different advantages: scientific models are effective at extrapolation and can be fit with small data and few parameters, whereas machine learning models require "big data" and a large number of parameters but are not biased by the validity of prior assumptions. Scientific machine learning endeavours to combine the two disciplines in order to develop models that retain the advantages of each. Specifically, it works to develop explainable models that are data-driven but require less data than traditional machine learning methods, through the utilization of centuries of scientific literature. The resulting model therefore possesses knowledge that prevents overfitting, reduces the number of parameters, and promotes extrapolability, while still utilizing machine learning techniques to learn the terms that are unexplainable by prior assumptions. We call these hybrid data-driven models "model-aware machine learning" (MA-ML) methods.
In this talk, we present a few efforts in this MA-ML direction: 1) a ROM-ML approach, and 2) an Autoencoder-based Inversion (AI) approach. Theoretical results for linear PDE-constrained inverse problems and numerical results for various nonlinear PDE-constrained inverse problems will be presented to demonstrate the validity of the proposed approaches.

April 6, 2021

Speaker: Xun Huan; Discussant: Habib Najm

Title: Model-based Sequential Experimental Design

Experiments are indispensable for learning and developing models in engineering and science. When experiments are expensive, a careful design of these limited data-acquisition opportunities can be immensely beneficial. Optimal experimental design (OED), while leveraging the predictive capabilities of a simulation model, provides a statistical framework to systematically quantify and maximize the value of an experiment. We will describe the main ingredients of setting up an OED problem in a general manner that also captures the synergy among multiple experiments conducted in sequence. We cast this sequential learning problem in a Bayesian setting with information-based utilities and solve it numerically via policy-gradient methods from reinforcement learning.
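A standard information-based utility in OED is the expected information gain (EIG). For a toy linear-Gaussian experiment y = d·θ + ε it can be estimated by nested Monte Carlo and checked against its closed form (a generic sketch of the utility only, not the sequential policy-gradient machinery from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
sig_p, sig_n = 1.0, 0.5   # prior std of theta, observation noise std

def eig_nested_mc(d, n_outer=2000, n_inner=2000):
    """EIG(d) = E_{theta,y}[ log p(y|theta,d) - log p(y|d) ] for y = d*theta + eps."""
    theta = rng.normal(0.0, sig_p, n_outer)
    y = d * theta + rng.normal(0.0, sig_n, n_outer)
    ll = -0.5 * ((y - d * theta) / sig_n) ** 2            # Gaussian constant cancels
    theta_in = rng.normal(0.0, sig_p, n_inner)
    ll_mat = -0.5 * ((y[:, None] - d * theta_in[None, :]) / sig_n) ** 2
    log_ev = np.log(np.exp(ll_mat).mean(axis=1))          # inner evidence estimate
    return float(np.mean(ll - log_ev))

# Closed form for this model: EIG(d) = 0.5 * log(1 + (d * sig_p / sig_n)**2),
# so larger |d| (a more informative design) yields a larger expected gain.
```

Maximizing such an estimator over the design variable d (here a scalar) is the single-experiment OED problem; the sequential setting composes these gains across experiments.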

March 18, 2021

Speaker: Nathaniel Trask; Discussant: Jim Stewart

Title: Deep learning architectures for structure preservation and hp-convergence

Deep learning has attracted attention as a powerful means of developing data-driven models due to its exceptional approximation properties, particularly in high dimensions. Application to scientific machine learning (SciML) settings, however, mandates guarantees regarding convergence, stability of extracted models, and physical realizability. In this talk, we present the development of deep learning architectures incorporating ideas from traditional numerical discretization to obtain SciML tools as trustworthy as, e.g., finite element discretizations of forward problems. In the first half, we demonstrate how ideas from the approximation theory literature can be used to develop partition of unity network (pouNet) architectures that realize hp-convergence for smooth data and < 1% error for piecewise constant data, and that may be applied to high-dimensional data with latent low-dimensional structure. In the second half, we establish how ideas from mimetic discretization of PDEs may be used to design structure-preserving neural networks. The de Rham complex underpinning compatible PDE discretization can be extended to graphs, allowing the design of architectures that respect exact-sequence requirements and enabling the construction of invertible Hodge Laplacians. The resulting "data-driven exterior calculus" provides building blocks for designing robust structure-preserving surrogates for elliptic problems with solvability guarantees.
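The exact-sequence requirement referenced above is that of the de Rham complex, which compatible (mimetic) discretizations, and here their graph extensions, preserve; in three dimensions it reads:

```latex
0 \longrightarrow H^1(\Omega) \xrightarrow{\;\mathrm{grad}\;}
H(\mathrm{curl};\Omega) \xrightarrow{\;\mathrm{curl}\;}
H(\mathrm{div};\Omega) \xrightarrow{\;\mathrm{div}\;} L^2(\Omega) \longrightarrow 0
```

The defining identities curl ∘ grad = 0 and div ∘ curl = 0 are what a structure-preserving architecture mimics exactly at the discrete level, rather than only approximately through training.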

February 10, 2021

Speaker: Rebecca Morrison; Discussant: Youssef Marzouk

Title: Learning Sparse Non-Gaussian Graphical Models

Identification and exploitation of a sparse undirected graphical model (UGM) can simplify inference and prediction processes, illuminate previously unknown variable relationships, and even decouple multi-domain computational models. In the continuous realm, the UGM corresponding to a Gaussian data set is equivalent to the non-zero entries of the inverse covariance matrix. However, this correspondence no longer holds when the data is non-Gaussian. In this talk, we explore a recently developed algorithm called SING (Sparsity Identification of Non-Gaussian distributions), which identifies edges using Hessian information of the log density. Various data sets are examined, with sometimes surprising results about the nature of non-Gaussianity.
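The Gaussian baseline mentioned above (graph = sparsity pattern of the inverse covariance, i.e., precision, matrix) is easy to verify numerically; SING generalizes this idea to non-Gaussian densities via Hessian information of the log density. A sketch for a three-node chain graph, with a hypothetical edge-detection threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Chain graph 0-1-2: the precision matrix has no (0,2) entry
P = np.array([[ 2.0, -0.8,  0.0],
              [-0.8,  2.0, -0.8],
              [ 0.0, -0.8,  2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(P), size=20_000)

# Estimate the precision matrix from samples and threshold small entries
P_hat = np.linalg.inv(np.cov(X, rowvar=False))
edges = np.abs(P_hat) > 0.3   # hypothetical cutoff for declaring an edge
```

With enough samples, `edges` recovers the chain: (0,1) and (1,2) are present while (0,2) is absent, even though variables 0 and 2 are marginally correlated. For non-Gaussian data this precision-matrix shortcut fails, which is the gap SING addresses.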
