Technical Thrust Area
Uncertainty Quantification and Probabilistic Modeling

Committee
 
Chair: Abani Patra, Tufts University
Vice-Chair: Serge Prudhomme, Polytechnique Montréal
Members-at-Large:
Johann Guilleminot, Duke University
Jian-Xun Wang, University of Notre Dame

Upcoming Events

USACM Thematic Conference

Uncertainty Quantification for Machine Learning Integrated Physics Modeling (MLIP), August 18-19, 2022, Arlington, Virginia

Monthly Webinar

November 4, 2021; 3:00-4:00pm EDT. Participate via Zoom: https://us06web.zoom.us/j/92756548524?pwd=cTFoRXIvNVN4dVFoaHEzK0pQQjhldz09 (Meeting ID: 927 5654 8524; Passcode: 934745)

Speaker: Catherine Gorlé; Discussant: Gianluca Iaccarino

Title: Uncertainty Quantification and Data Assimilation for Predictive Computational Wind Engineering

Computational fluid dynamics (CFD) can inform the sustainable design of buildings and cities by optimizing pedestrian wind comfort, air quality, thermal comfort, energy efficiency, and resilience to extreme wind events. An important limitation is that the accuracy of CFD results can be compromised by the large natural variability and complex physics characteristic of urban flow problems. In this talk, I will show how uncertainty quantification and data assimilation can be leveraged to evaluate and improve the predictive capabilities of Reynolds-averaged Navier-Stokes simulations for urban flow and dispersion. I will focus on quantifying inflow and turbulence model-form uncertainties for two different urban environments: Oklahoma City and Stanford’s campus. For both test cases, the predictive capabilities of the models will be evaluated by comparing the model results to field measurements.
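
As a rough, hypothetical illustration of how inflow uncertainty can be propagated to a quantity of interest, the short Python sketch below samples uncertain inflow conditions and pushes them through a stand-in forward model; the distributions, the function rans_surrogate, and the sensor setup are invented placeholders, not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def rans_surrogate(wind_speed, wind_dir):
    # Hypothetical stand-in for a full RANS solve: returns a pedestrian-level
    # wind speed at one sensor location. A real study would call the CFD
    # solver (or a trained surrogate of it) here.
    return 0.6 * wind_speed * np.cos(np.radians(wind_dir - 30.0)) ** 2

# Uncertain inflow conditions (distributions are illustrative only).
speed = 8.0 * rng.weibull(2.0, size=1000)       # reference wind speed [m/s]
direction = rng.normal(30.0, 15.0, size=1000)   # wind direction [deg]

samples = rans_surrogate(speed, direction)      # Monte Carlo propagation

# Summarize the predictive distribution at the sensor; field measurements
# would then be compared against this interval to assess the model.
print(f"mean = {samples.mean():.2f} m/s, 95% interval = "
      f"[{np.percentile(samples, 2.5):.2f}, {np.percentile(samples, 97.5):.2f}] m/s")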


Past Webinars

October 7, 2021

Speaker: Yeonjong Shin; Discussant: Dongbin Xiu

Title: Mathematical approaches for robustness and reliability in scientific machine learning

Machine learning (ML) has achieved unprecedented empirical success in diverse applications. It has now been applied to scientific problems as well, giving rise to a new sub-field known as Scientific Machine Learning (SciML). Many ML techniques, however, are highly sophisticated, requiring trial-and-error and numerous tricks. The result is a lack of robustness and reliability, which are critical for scientific applications.
This talk centers on mathematical approaches that provide robustness and reliability for SciML. The first part will focus on the data-driven discovery of dynamical systems. I will present a general framework for designing neural networks (NNs) for the GENERIC formalism, resulting in GENERIC formalism informed NNs (GFINNs). The framework provides flexible ways of leveraging available physics information in NNs, and a universal approximation theorem for GFINNs is established. The second part will be on Active Neuron Least Squares (ANLS), an efficient training algorithm for NNs. ANLS is designed from insight gained from the analysis of gradient-descent training of NNs, in particular the analysis of the plateau phenomenon. The performance of ANLS will be demonstrated and compared with popular existing methods on various learning tasks ranging from function approximation to solving PDEs.
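
For context on the GENERIC structure mentioned above (standard formulation, stated here for background rather than taken from the talk): the state z evolves as

    dz/dt = L(z) ∇E(z) + M(z) ∇S(z),    with    L(z) ∇S(z) = 0   and   M(z) ∇E(z) = 0,

where E is an energy, S an entropy, L(z) is skew-symmetric, and M(z) is symmetric positive semi-definite. The degeneracy conditions guarantee that E is conserved and S is non-decreasing along trajectories; GFINNs parameterize E, S, L, and M with neural networks while enforcing these conditions by construction.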

June 25, 2021

Speaker: Jiaxin Zhang; Discussant: Richard Archibald

Title: Uncertainty-aware inverse learning using generative flows

Solving inverse problems is a longstanding challenge in mathematics and the natural sciences, where the goal is to determine hidden parameters from a set of specific observations. Typically, the forward problem going from parameter space to observation space is well-established, but the inverse process is often ill-posed and ambiguous, with multiple parameter sets resulting in the same measurement. Recently, deep invertible architectures have been proposed to solve the inverse problem, but they currently struggle to precisely localize the exact solutions and to fully explore the parameter space without missing solutions. In this talk, we will present a novel approach leveraging recent advances in normalizing flows and deep invertible neural network architectures to solve inverse problems efficiently and accurately. Given a specific observation and latent-space sampling, the learned invertible model provides a posterior over the parameter space; we treat these posterior samples as an implicit prior initialization that lets us narrow down the search space. We then use gradient descent with backpropagation to calibrate the inverse solutions within a local region. Meanwhile, an exploratory sampling strategy is imposed on the latent space to better explore and capture all possible solutions. We evaluate our approach on analytical benchmark tasks, crystal design in quantum chemistry, and image reconstruction in medicine and astrophysics, and find that it achieves superior performance compared to several state-of-the-art baseline methods.
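
As a schematic (not code from the talk) of the two-stage sample-then-refine strategy described above, the sketch below replaces the trained flow with a hypothetical stand-in, inverse_flow, and keeps only the workflow: draw latent samples, map them to parameter candidates, then calibrate each candidate by gradient descent on the forward-model misfit.

import numpy as np

rng = np.random.default_rng(1)

def forward(x):
    # Toy forward model with a non-unique inverse: y = x0^2 + x1^2, so every
    # point on a circle of radius sqrt(y) is a valid solution.
    return x[..., 0] ** 2 + x[..., 1] ** 2

def inverse_flow(y_obs, z):
    # Hypothetical stand-in for the learned invertible model: maps latent
    # samples z to parameter candidates scattered near the solution set.
    angle = 2.0 * np.pi * z[:, 0]
    radius = np.sqrt(y_obs) * (1.0 + 0.1 * z[:, 1])
    return np.stack([radius * np.cos(angle), radius * np.sin(angle)], axis=-1)

y_obs = 4.0
z = rng.uniform(size=(256, 2))
x = inverse_flow(y_obs, z)     # stage 1: posterior samples as initialization

for _ in range(200):           # stage 2: local gradient-descent calibration
    grad = 2.0 * (forward(x) - y_obs)[:, None] * (2.0 * x)  # grad of (f(x)-y)^2
    x -= 0.01 * grad

print("max |f(x) - y_obs| after refinement:", np.abs(forward(x) - y_obs).max())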

May 17, 2021

Speaker: Tan Bui-Thanh; Discussant: Omar Ghattas

Title: Model-aware deep learning approaches for forward and PDE-constrained inverse problems

The fast growth in practical applications of machine learning across a range of contexts has fueled renewed interest in machine learning methods in recent years. Scientific machine learning is an emerging discipline that merges scientific computing and machine learning. Whilst scientific computing focuses on large-scale models derived from scientific laws describing physical phenomena, machine learning focuses on developing data-driven models that require minimal knowledge and few prior assumptions. The contrast between the two approaches yields complementary advantages: scientific models are effective at extrapolation and can be fit with small data and few parameters, whereas machine learning models require "big data" and a large number of parameters but are not biased by the validity of prior assumptions. Scientific machine learning endeavours to combine the two disciplines to develop models that retain the advantages of each. Specifically, it works to develop explainable models that are data-driven but require less data than traditional machine learning methods by drawing on centuries of scientific literature. The resulting models therefore possess knowledge that prevents overfitting, reduces the number of parameters, and promotes extrapolatability, while still using machine learning techniques to learn the terms that prior assumptions cannot explain. We call these hybrid data-driven models "model-aware machine learning" (MA-ML) methods.
In this talk, we present a few efforts in this MA-ML direction: 1) a ROM-ML approach, and 2) an Autoencoder-based Inversion (AI) approach. Theoretical results for linear PDE-constrained inverse problems and numerical results for various nonlinear PDE-constrained inverse problems will be presented to demonstrate the validity of the proposed approaches.

April 6, 2021

Speaker: Xun Huan; Discussant: Habib Najm

Title: Model-based Sequential Experimental Design

Experiments are indispensable for learning and developing models in engineering and science. When experiments are expensive, careful design of these limited data-acquisition opportunities can be immensely beneficial. Optimal experimental design (OED), while leveraging the predictive capabilities of a simulation model, provides a statistical framework to systematically quantify and maximize the value of an experiment. We will describe the main ingredients in setting up an OED problem in a general manner that also captures the synergy among multiple experiments conducted in sequence. We cast this sequential learning problem in a Bayesian setting with information-based utilities and solve it numerically via policy-gradient methods from reinforcement learning.
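
For context on the information-based utilities mentioned above: a common choice is the expected information gain (EIG), which can be estimated by nested Monte Carlo. The sketch below does this for a toy one-parameter model y = theta * d^2 + noise; the model, prior, and noise level are illustrative placeholders, not specifics from the talk.

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)
sigma = 0.1  # observation noise standard deviation

def eig(d, n_outer=500, n_inner=500):
    theta = rng.normal(size=n_outer)                       # prior draws
    y = theta * d**2 + sigma * rng.normal(size=n_outer)    # simulated outcomes
    # Log-likelihood at the generating theta (Gaussian normalization
    # constants cancel against the same constants in the log-evidence).
    log_lik = -0.5 * ((y - theta * d**2) / sigma) ** 2
    # Log-evidence: marginalize over fresh prior draws via logsumexp.
    theta_in = rng.normal(size=(n_inner, 1))
    log_evid = logsumexp(-0.5 * ((y[None, :] - theta_in * d**2) / sigma) ** 2,
                         axis=0) - np.log(n_inner)
    return np.mean(log_lik - log_evid)                     # EIG in nats

for d in (0.1, 0.5, 1.0):
    print(f"design d = {d:.1f}: estimated EIG ≈ {eig(d):.2f} nats")

Larger d makes the data more informative about theta in this toy model, so the estimated EIG grows with d; an OED loop would maximize this utility over the design space.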

March 18, 2021

Speaker: Nathaniel Trask; Discussant: Jim Stewart

Title: Deep learning architectures for structure preservation and hp-convergence

Deep learning has attracted attention as a powerful means of developing data-driven models due to its exceptional approximation properties, particularly in high dimensions. Application to scientific machine learning (SciML) settings, however, mandates guarantees regarding convergence, stability of extracted models, and physical realizability. In this talk, we present the development of deep learning architectures that incorporate ideas from traditional numerical discretization to obtain SciML tools as trustworthy as, e.g., finite element discretizations of forward problems. In the first half, we demonstrate how ideas from the approximation-theory literature can be used to develop partition of unity network (pouNet) architectures, which realize hp-convergence for smooth data and < 1% error for piecewise-constant data, and which may be applied to high-dimensional data with latent low-dimensional structure. In the second half, we establish how ideas from mimetic discretization of PDEs may be used to design structure-preserving neural networks. The de Rham complex underpinning compatible PDE discretization may be extended to graphs, allowing the design of architectures that respect exact-sequence requirements and admit invertible Hodge Laplacians. The resulting "data-driven exterior calculus" provides building blocks for designing robust structure-preserving surrogates for elliptic problems with solvability guarantees.
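
As a rough illustration of the partition-of-unity idea (with fixed, hand-chosen partitions rather than the trained architecture from the talk), the sketch below weights local linear polynomials by softmax-style partition functions and fits the coefficients by linear least squares.

import numpy as np

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)                     # target function

centers = np.linspace(0.0, 1.0, 5)              # fixed partition centers
logits = -50.0 * (x[:, None] - centers) ** 2    # sharpness chosen arbitrarily
phi = np.exp(logits)
phi /= phi.sum(axis=1, keepdims=True)           # partition of unity: rows sum to 1

# Each partition carries a local linear polynomial a_i + b_i * x; stacking
# the weighted basis columns gives a linear least-squares problem.
A = np.hstack([phi, phi * x[:, None]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"max abs error, 5 partitions with local linear polys: "
      f"{np.abs(A @ coef - y).max():.3f}")

In a pouNet the partitions themselves are produced by a trainable network, and raising the local polynomial order or refining the partitions yields the hp-type convergence described above.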

February 10, 2021

Speaker: Rebecca Morrison; Discussant: Youssef Marzouk

Title: Learning Sparse Non-Gaussian Graphical Models

Identification and exploitation of a sparse undirected graphical model (UGM) can simplify inference and prediction processes, illuminate previously unknown variable relationships, and even decouple multi-domain computational models. In the continuous realm, the UGM corresponding to a Gaussian data set is equivalent to the non-zero entries of the inverse covariance matrix. However, this correspondence no longer holds when the data is non-Gaussian. In this talk, we explore a recently developed algorithm called SING (Sparsity Identification of Non-Gaussian distributions), which identifies edges using Hessian information of the log density. Various data sets are examined, with sometimes surprising results about the nature of non-Gaussianity.
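
To make the Gaussian baseline concrete (a small illustration only; SING estimates analogous Hessian-based scores for non-Gaussian densities): for a Gaussian, the Hessian of the log density is the negative precision matrix, so the graph can be read off its sparsity pattern even though the covariance is dense.

import numpy as np

# A 4-node chain graph: the precision (inverse covariance) is tridiagonal.
precision = np.array([[ 2., -1.,  0.,  0.],
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [ 0.,  0., -1.,  2.]])
cov = np.linalg.inv(precision)

# The covariance itself is dense, so pairwise correlations alone would
# suggest edges everywhere...
print("zero entries in covariance:", int(np.isclose(cov, 0.0).sum()))
# ...but the Hessian of log p (equal to -precision for a Gaussian) exposes
# the true conditional-independence structure.
edges = ~np.isclose(precision, 0.0)
np.fill_diagonal(edges, False)
print("recovered edges:",
      [(i, j) for i in range(4) for j in range(i + 1, 4) if edges[i, j]])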

 