Technical Thrust Area
Uncertainty Quantification and Probabilistic Modeling
Chair: Serge Prudhomme, Polytechnique Montreal
Vice-Chair: Johann Guilleminot, Duke University
Members-at-Large: Jian-Xun Wang, University of Notre Dame
Upcoming Events
Monthly Webinar: webinars will resume in Spring 2023.

Past Webinars
May 12, 2022
Speaker: Jian-Xun Wang, University of Notre Dame; Discussant: Michael Brenner, Harvard University
Title: Leveraging Physics-Induced Bias in Scientific Machine Learning for Computational Mechanics
First-principle modeling and simulation of complex systems based on partial differential equations (PDEs) and numerical discretization have been developed for decades and have achieved great success. Nonetheless, traditional numerical solvers face significant challenges in many practical scenarios, e.g., inverse problems, uncertainty quantification, design, and optimization. Moreover, for complex systems the governing equations might not be fully known due to an incomplete understanding of the underlying physics, in which case a first-principles numerical solver cannot be built. Recent advances in data science and machine learning, combined with the ever-increasing availability of high-fidelity simulation and measurement data, open up new opportunities for developing data-enabled computational mechanics models. Although state-of-the-art machine/deep learning techniques hold great promise, many challenges remain, e.g., the requirement of “big data”, limited generalizability/extrapolability, and a lack of interpretability/explainability. On the other hand, there is often a wealth of prior knowledge of the systems, including physical laws and phenomenological principles, that can be leveraged in this regard. Thus, there is an urgent need for fundamentally new and transformative machine learning techniques, closely grounded in physics, to address these challenges in computational mechanics problems. This talk will briefly discuss our recent developments in scientific machine learning for computational mechanics, focusing on several aspects of how to bake physics-induced bias into machine/deep learning models for data-enabled predictive modeling.
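To make the idea of physics-induced bias concrete, a minimal sketch (illustrative only, not the speaker's implementation) augments the usual data misfit with the discrete residual of a known PDE, here the 1-D heat equation u_t = α u_xx on a regular grid:

```python
import numpy as np

def physics_informed_loss(u, x, t, alpha, u_data, data_mask, w_pde=1.0):
    """Data misfit plus discrete PDE residual for u_t = alpha * u_xx.

    u         : (nt, nx) array, candidate solution on a regular grid
    u_data    : (nt, nx) array of observations (used only where data_mask is True)
    data_mask : boolean array marking where observations exist
    """
    dx, dt = x[1] - x[0], t[1] - t[0]
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                       # forward difference in time
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2  # central difference in space
    pde_residual = np.mean((u_t - alpha * u_xx) ** 2)
    data_misfit = np.mean((u[data_mask] - u_data[data_mask]) ** 2)
    return data_misfit + w_pde * pde_residual

# A physics-consistent field should yield a near-zero loss:
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.1, 201)
alpha = 1.0
U = np.exp(-alpha * np.pi**2 * t)[:, None] * np.sin(np.pi * x)[None, :]  # exact solution
mask = np.zeros_like(U, dtype=bool)
mask[0, :] = True                       # observe the initial condition only
loss = physics_informed_loss(U, x, t, alpha, U, mask)
```

In practice the candidate field u would be the output of a neural network and this loss would be minimized over its weights; here the exact solution is plugged in only to show that satisfying the physics drives the penalty toward zero.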
Specifically, the following topics will be covered: (1) PDE-structure-preserved deep learning, where the neural network architectures are built by preserving mathematical structures of the (partially) known governing physics for predicting spatiotemporal dynamics, and (2) physics-informed geometric deep learning for predictive modeling involving complex geometries and irregular domains.
Bio: Dr. Jian-Xun Wang is an assistant professor of Aerospace and Mechanical Engineering at the University of Notre Dame. He received a Ph.D. in Aerospace Engineering from Virginia Tech in 2017 and was a postdoctoral scholar at UC Berkeley before joining Notre Dame in 2018. He is a recipient of the 2021 NSF CAREER Award. His research focuses on scientific machine learning, data-enabled computational modeling, Bayesian data assimilation, and uncertainty quantification.

May 5, 2022
Speaker: Ioannis Kougioumtzoglou, Columbia University; Discussant: George Deodatis, Columbia University
Title: Path Integrals in Stochastic Engineering Dynamics
Ever-increasing computational capabilities, the development of potent signal processing tools, and advanced experimental setups have contributed to highly sophisticated modeling of engineering systems and related excitations. As a result, the form of the governing equations has become highly complex from a mathematical perspective. Examples include high dimensionality, complex nonlinearities, joint time-frequency representations, and generalized/fractional calculus modeling. In many cases even the deterministic solution of such equations is an open issue and an active research topic. Clearly, solving the stochastic counterparts of these equations becomes orders of magnitude more challenging. To address this issue, the speaker and co-workers have recently developed a solution framework, based on the concept of the Wiener path integral, for stochastic response analysis and reliability assessment of diverse dynamical systems of engineering interest.
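As a rough, self-contained illustration of the path-integral concept (a generic Onsager-Machlup discretization, not the speaker's framework), the most probable path of the scalar SDE dx = -kx dt + √(2D) dW between fixed endpoints can be approximated by minimizing a discretized action functional:

```python
import numpy as np
from scipy.optimize import minimize

def action(x_interior, x0, xT, dt, k, D):
    """Discretized Onsager-Machlup action for dx = -k*x dt + sqrt(2D) dW."""
    x = np.concatenate(([x0], x_interior, [xT]))
    drift = -k * x[:-1]
    incr = (x[1:] - x[:-1]) / dt
    return np.sum((incr - drift) ** 2) * dt / (4.0 * D)

k, D, T, n = 1.0, 0.5, 1.0, 50
dt = T / n
x0, xT = 0.0, 1.0                               # pinned endpoints
guess = np.linspace(x0, xT, n + 1)[1:-1]        # straight-line initial path
res = minimize(action, guess, args=(x0, xT, dt, k, D), method="BFGS")
most_probable_path = np.concatenate(([x0], res.x, [xT]))
```

The minimizing path dominates the path-integral representation of the transition density; richer functionals (nonlinearities, fractional operators) change the action but not the overall recipe.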
Significant novelties and advantages of the framework will also be highlighted in this talk.
Bio: Prof. Ioannis A. Kougioumtzoglou received his five-year Diploma in Civil Engineering from the National Technical University of Athens (NTUA) in Greece (2007), and his M.Sc. (2009) and Ph.D. (2011) degrees in Civil Engineering from Rice University, TX, USA. He joined Columbia University in 2014, where he is currently an Associate Professor in the Department of Civil Engineering & Engineering Mechanics. He is the author of approximately 150 publications, including more than 80 technical papers in archival international journals. Prof. Kougioumtzoglou was chosen in 2018 by the National Science Foundation (NSF) to receive the CAREER Award, which recognizes early-stage scholars with high levels of promise and excellence. He is also the 2014 European Association of Structural Dynamics (EASD) Junior Research Prize recipient “for his innovative influence on the field of nonlinear stochastic dynamics”. Prof. Kougioumtzoglou is an Associate Editor for the ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems and an Editorial Board Member of the following journals: Mechanical Systems and Signal Processing, Probabilistic Engineering Mechanics, and the International Journal of Non-Linear Mechanics. He is also a co-Editor of the Encyclopedia of Earthquake Engineering (Springer) and has served as a Guest Editor for several Special Issues in international journals. Prof. Kougioumtzoglou co-chaired the ASCE Engineering Mechanics Institute Conference 2021 and Probabilistic Mechanics & Reliability Conference 2021 (EMI 2021 / PMC 2021) and has served on the scientific and/or organizing committees of many international technical conferences. He is a member of both the American Society of Civil Engineers (M.ASCE) and the American Society of Mechanical Engineers (M.ASME), and he currently serves as a member of the ASCE EMI committees on Dynamics and on Probabilistic Methods.
He is a Licensed Professional Civil Engineer in Greece, and a Fellow of the Higher Education Academy (FHEA) in the UK.

March 16, 2022
Speaker: Danial Faghihi, University at Buffalo; Discussant: J. Tinsley Oden, The University of Texas at Austin
Title: Toward Selecting Optimal Predictive Computational Models
Of overriding importance in the scientific prediction of complex physical systems is the validation of mechanistic models in the presence of uncertainties. In addition to data uncertainty and numerical error, uncertainty in selecting the optimal model formulation poses a significant challenge to predictive computational modeling. In a Bayesian setting, the choice of models for computational prediction relies on the available observational data and prior beliefs about the model and its parameters. This talk discusses a systematic framework for selecting an “optimal” predictive model, among the numerous possible models with different fidelities and complexities, that delivers sufficiently accurate computational predictions. In particular, we extend an adaptive computational framework, known as the Occam Plausibility ALgorithm (OPAL), that leverages Bayesian inference and the notion of model plausibility to select the simplest valid model. The key feature of our modification is the design of model-specific validation experiments to provide observational data reflecting, in some sense, the structure of the target prediction. An application of this framework to selecting an optimal discrete-to-continuum multiscale model for predicting the performance of microscale materials systems will be presented. We will also provide an example of leveraging validated and selected models for predicting heterogeneous tumor morphology in specific subjects, via a scalable solution algorithm for high-dimensional Bayesian inference.
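The Occam step of such a framework can be sketched with a toy example (illustrative priors, noise level, and candidate models, not OPAL itself): compute each candidate model's Bayesian evidence by integrating its likelihood against its prior, then normalize to obtain plausibilities. With data from a linear process, the simpler linear model wins even though the quadratic model fits equally well:

```python
import numpy as np

# Toy data from a linear process; two candidate models of increasing complexity.
x = np.linspace(0.0, 1.0, 9)
y = 2.0 * x                        # noise-free data for a clean illustration
sigma = 0.1                        # assumed observation-noise level in the likelihood

def evidence_linear():
    # Model 1: y = a*x with Gaussian prior a ~ N(0, 2^2); evidence by quadrature.
    a, da = np.linspace(-6.0, 6.0, 2001, retstep=True)
    prior = np.exp(-0.5 * (a / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))
    like = np.exp(-0.5 * ((a[:, None] * x - y) ** 2).sum(axis=1) / sigma**2)
    return np.sum(like * prior) * da

def evidence_quadratic():
    # Model 2: y = a*x + b*x^2 with independent N(0, 2^2) priors on a and b.
    a, da = np.linspace(-6.0, 6.0, 401, retstep=True)
    b, db = np.linspace(-6.0, 6.0, 401, retstep=True)
    A, B = np.meshgrid(a, b, indexing="ij")
    prior = (np.exp(-0.5 * (A / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))
             * np.exp(-0.5 * (B / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi)))
    pred = A[..., None] * x + B[..., None] * x**2
    like = np.exp(-0.5 * ((pred - y) ** 2).sum(axis=-1) / sigma**2)
    return np.sum(like * prior) * da * db

ev = np.array([evidence_linear(), evidence_quadratic()])
plausibility = ev / ev.sum()       # the Occam factor penalizes the extra parameter
```

The extra parameter of the quadratic model dilutes its prior mass over a larger space, so its evidence, and hence its plausibility, is lower: Occam's razor emerges automatically from the Bayesian calculation.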
Finally, we will discuss challenges and possible future directions in developing strategies for selecting optimal neural networks in the context of hybrid physics/machine-learning multiscale models of mesoporous thermal insulation materials.

February 23, 2022
Speaker: Johann Guilleminot, Duke University; Discussant: Roger Ghanem, University of Southern California
In this talk, we discuss the construction of admissible, physics-consistent, and identifiable stochastic models for uncertainty quantification. We consider the case of variables taking values in constrained spaces (with boundaries defined by manifolds) and indexed by complex geometries described by non-convex sets. This setting is relevant to a broad variety of applications in computational mechanics, ranging from mechanical simulations on parts produced by additive manufacturing to multiscale analyses with stochastic connected phases. We first present theoretical and computational procedures to ensure well-posedness and to generate random field representations defined by arbitrary marginal transport maps. The sampling scheme relies, in particular, on the construction of a family of stochastic differential equations driven by an ad hoc space-time process and involves an adaptive step sequence that ensures stability near the boundaries of the state space. Finally, we provide results pertaining to modeling, sampling, and statistical inverse identification for various applications, including additive manufacturing, phase-field fracture modeling, multiscale analyses on nonlinear microstructures, and patient-specific computations on soft biological tissues.

December 8, 2021
Speaker: Audrey Olivier, University of Southern California; Discussant: Lori Graham-Brady, Johns Hopkins University
Title: Bayesian Learning of Neural Networks for Small or Imbalanced Data Sets
Data-based predictive models such as neural networks are showing great potential for use in various scientific and engineering fields.
They can be used in conjunction with physics-based models to account for missing or hard-to-model physics, or as surrogates to replace high-fidelity, overly costly physics-based simulations. However, in many engineering fields data are expensive to obtain, and data scarcity and/or data imbalance pose a challenge. Many physical processes are also random in nature and exhibit large aleatory uncertainties. Bayesian methods allow for a comprehensive account of both aleatory and epistemic uncertainties; however, they are challenging to use for over-parameterized problems such as neural networks. This talk will present methods based on variational inference and model averaging for the probabilistic training of neural networks. An application to surrogate materials modeling will be presented, where data are scarce because they are obtained from expensive high-fidelity materials simulations. Finally, we will show how this probabilistic approach allows scientific intuition to be integrated by defining a meaningful prior and likelihood for training. The example presented pertains to the prediction of ambulance travel times, using real data provided by the New York City Fire Department.

November 4, 2021
Speaker: Catherine Gorlé; Discussant: Gianluca Iaccarino
Title: Uncertainty Quantification and Data Assimilation for Predictive Computational Wind Engineering
Computational fluid dynamics (CFD) can inform the sustainable design of buildings and cities in terms of optimizing pedestrian wind comfort, air quality, thermal comfort, energy efficiency, and resiliency to extreme wind events. An important limitation is that the accuracy of CFD results can be compromised by the large natural variability and complex physics that are characteristic of urban flow problems. In this talk I will show how uncertainty quantification and data assimilation can be leveraged to evaluate and improve the predictive capabilities of Reynolds-averaged Navier-Stokes simulations for urban flow and dispersion.
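The forward-propagation half of such a study can be sketched as follows, with a hypothetical algebraic surrogate standing in for the RANS solver and made-up ranges for the uncertain inflow parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_dispersion(u_ref, alpha, x=500.0, z=2.0, q=1.0):
    """Hypothetical surrogate: pollutant concentration downstream of a source.

    u_ref : reference wind speed [m/s]; alpha : power-law exponent of the
    inflow profile. A real study would run a RANS solver here instead.
    """
    u_z = u_ref * (z / 10.0) ** alpha           # power-law inflow velocity profile
    return q / (u_z * x)                        # crude advection-dilution scaling

# Uncertain inflow: wind speed and profile exponent treated as random inputs.
u_samples = rng.normal(5.0, 1.0, 10_000).clip(0.5)
a_samples = rng.uniform(0.10, 0.35, 10_000)
c = toy_dispersion(u_samples, a_samples)        # Monte Carlo propagation

lo, med, hi = np.percentile(c, [2.5, 50.0, 97.5])   # 95% interval on the prediction
```

The resulting interval quantifies how inflow variability alone spreads the predicted concentration; field measurements can then be assimilated to sharpen the input distributions.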
I will focus on quantifying inflow and turbulence model-form uncertainties for two different urban environments: Oklahoma City and Stanford’s campus. For both test cases, the predictive capabilities of the models will be evaluated by comparing the model results to field measurements.

October 7, 2021
Speaker: Yeonjong Shin; Discussant: Dongbin Xiu
Title: Mathematical approaches for robustness and reliability in scientific machine learning
Machine learning (ML) has achieved unprecedented empirical success in diverse applications. It has now been applied to solve scientific problems, giving rise to a new subfield known as Scientific Machine Learning (SciML). Many ML techniques, however, are very sophisticated, requiring trial and error and numerous tricks, which results in a lack of robustness and reliability, both critical factors for scientific applications.

June 25, 2021
Speaker: Jiaxin Zhang; Discussant: Richard Archibald
Title: Uncertainty-aware inverse learning using generative flows
Solving inverse problems is a long-standing challenge in mathematics and the natural sciences, where the goal is to determine hidden parameters from a set of specific observations. Typically, the forward problem going from parameter space to observation space is well-established, but the inverse process is often ill-posed and ambiguous, with multiple parameter sets resulting in the same measurement. Recently, deep invertible architectures have been proposed to solve the inverse problem, but they currently struggle to localize the exact solutions precisely and to fully explore the parameter space without missing solutions. In this talk, we will present a novel approach leveraging recent advances in normalizing flows and deep invertible neural network architectures for efficiently and accurately solving inverse problems.
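The invertibility at the heart of such architectures can be illustrated with a single additive coupling layer, a generic normalizing-flow building block (not the speakers' specific network):

```python
import numpy as np

def coupling_forward(x, shift_net):
    """Additive coupling: split x, transform one half conditioned on the other."""
    x1, x2 = x[: len(x) // 2], x[len(x) // 2 :]
    y2 = x2 + shift_net(x1)          # invertible no matter how complex shift_net is
    return np.concatenate([x1, y2])

def coupling_inverse(y, shift_net):
    y1, y2 = y[: len(y) // 2], y[len(y) // 2 :]
    x2 = y2 - shift_net(y1)          # exact inverse: subtract the same shift
    return np.concatenate([y1, x2])

# A stand-in "network": any fixed nonlinear map works for the demonstration.
shift = lambda h: np.tanh(3.0 * h) + h**2

x = np.array([0.3, -1.2, 0.7, 2.0])
y = coupling_forward(x, shift)
x_rec = coupling_inverse(y, shift)   # recovers x to machine precision
```

Stacking many such layers (with the halves alternating) yields an expressive map whose exact inverse is available, which is what lets a trained flow turn latent samples into posterior samples over the parameter space.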
Given a specific observation and latent-space sampling, the learned invertible model provides a posterior over the parameter space; we identify these posterior samples as an implicit prior initialization, which enables us to narrow down the search space. We then use gradient descent with backpropagation to calibrate the inverse solutions within a local region. Meanwhile, an exploratory sampling strategy is imposed on the latent space to better explore and capture all possible solutions. We evaluate our approach on analytical benchmark tasks, crystal design in quantum chemistry, and image reconstruction in medicine and astrophysics, and find that it achieves superior performance compared to several state-of-the-art baseline methods.

May 17, 2021
Speaker: Tan Bui-Thanh; Discussant: Omar Ghattas
Title: Model-aware deep learning approaches for forward and PDE-constrained inverse problems
The fast growth in practical applications of machine learning in a range of contexts has fueled a renewed interest in machine learning methods over recent years. Subsequently, scientific machine learning is an emerging discipline that merges scientific computing and machine learning. Whereas scientific computing focuses on large-scale models derived from scientific laws describing physical phenomena, machine learning focuses on developing data-driven models that require minimal knowledge and prior assumptions. This contrast brings complementary advantages: scientific models are effective at extrapolation and can be fit with small data and few parameters, whereas machine learning models require “big data” and a large number of parameters but are not biased by the validity of prior assumptions. Scientific machine learning endeavours to combine the two disciplines in order to develop models that retain the advantages of each.
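A minimal sketch of this hybrid idea (toy data and names, not the MAML framework itself): keep the known physics term and fit, from data, only the residual that the physics leaves unexplained.

```python
import numpy as np

# Ground truth: a known linear drag law plus an "unknown" quadratic correction.
v = np.linspace(0.0, 10.0, 50)
c_known = 0.5                                    # physics: F = c_known * v (assumed known)
f_obs = c_known * v + 0.02 * v**2                # observations include unmodeled physics

def hybrid_predict(v, theta):
    """Known physics term + data-driven polynomial residual."""
    residual = np.polyval(theta, v)              # learned part: fits what physics misses
    return c_known * v + residual

# Fit only the residual (here by least squares on a small polynomial basis).
basis = np.vander(v, 3)                          # columns [v^2, v, 1]
theta = np.linalg.lstsq(basis, f_obs - c_known * v, rcond=None)[0]
pred = hybrid_predict(v, theta)
```

Because the physics term carries most of the signal, the learned component needs few parameters and little data, which is precisely the advantage the hybrid construction is after.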
Specifically, it works to develop explainable models that are data-driven but require less data than traditional machine learning methods, by drawing on centuries of scientific literature. The resulting model therefore possesses knowledge that prevents overfitting, reduces the number of parameters, and promotes extrapolability, while still utilizing machine learning techniques to learn the terms that are unexplainable by prior assumptions. We call these hybrid data-driven models “model-aware machine learning” (MAML) methods.

April 6, 2021
Speaker: Xun Huan; Discussant: Habib Najm
Title: Model-based Sequential Experimental Design
Experiments are indispensable for learning and developing models in engineering and science. When experiments are expensive, careful design of these limited data-acquisition opportunities can be immensely beneficial. Optimal experimental design (OED), while leveraging the predictive capabilities of a simulation model, provides a statistical framework to systematically quantify and maximize the value of an experiment. We will describe the main ingredients for setting up an OED problem in a general manner that also captures the synergy among multiple experiments conducted in sequence. We cast this sequential learning problem in a Bayesian setting with information-based utilities and solve it numerically via policy gradient methods from reinforcement learning.

March 18, 2021
Speaker: Nathaniel Trask; Discussant: Jim Stewart
Title: Deep learning architectures for structure preservation and hp-convergence
Deep learning has attracted attention as a powerful means of developing data-driven models due to its exceptional approximation properties, particularly in high dimensions. Application to scientific machine learning (SciML) settings, however, mandates guarantees regarding convergence, stability of extracted models, and physical realizability.
In this talk, we present the development of deep learning architectures that incorporate ideas from traditional numerical discretization to obtain SciML tools as trustworthy as, e.g., finite element discretizations of forward problems. In the first half, we demonstrate how ideas from the approximation theory literature can be used to develop partition of unity network (pouNet) architectures, which are able to realize hp-convergence for smooth data and < 1% error for piecewise-constant data, and which may be applied to high-dimensional data with latent low-dimensional structure. In the second half, we establish how ideas from the mimetic discretization of PDEs may be used to design structure-preserving neural networks. The de Rham complex underpinning compatible PDE discretization may be extended to graphs, allowing the design of architectures that respect exact-sequence requirements and the construction of invertible Hodge Laplacians. The resulting “data-driven exterior calculus” provides building blocks for designing robust structure-preserving surrogates for elliptic problems with solvability guarantees.

February 10, 2021
Speaker: Rebecca Morrison; Discussant: Youssef Marzouk
Title: Learning Sparse Non-Gaussian Graphical Models
Identification and exploitation of a sparse undirected graphical model (UGM) can simplify inference and prediction, illuminate previously unknown variable relationships, and even decouple multi-domain computational models. In the continuous realm, the UGM corresponding to a Gaussian data set is equivalent to the nonzero entries of the inverse covariance matrix. However, this correspondence no longer holds when the data are non-Gaussian. In this talk, we explore a recently developed algorithm called SING (Sparsity Identification of Non-Gaussian distributions), which identifies edges using Hessian information of the log density. Various data sets are examined, with sometimes surprising results about the nature of non-Gaussianity.
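For Gaussian data, the Hessian of the negative log density is exactly the precision matrix, so the edge-identification principle reduces to thresholding an estimated inverse covariance; a toy sketch of that special case (not the SING implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# A chain graph 0-1-2: variables 0 and 2 interact only through variable 1.
precision = np.array([[ 2.0, -0.8,  0.0],
                      [-0.8,  2.0, -0.8],
                      [ 0.0, -0.8,  2.0]])
samples = rng.multivariate_normal(np.zeros(3), np.linalg.inv(precision), 20_000)

# For a Gaussian, the Hessian of -log density is the precision matrix,
# so graph edges are the significant off-diagonal entries.
prec_hat = np.linalg.inv(np.cov(samples.T))
edges = (np.abs(prec_hat) > 0.3) & ~np.eye(3, dtype=bool)
```

For non-Gaussian data this simple equivalence breaks down, which is why an algorithm like SING instead examines Hessian information of the (estimated) log density directly.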
