**NUMRAD24** will take place in the week of **June 11th-14th**. The mornings will be devoted to introductory lectures, while more advanced topics will be presented in the afternoons. Depending on the availability of the participants, poster sessions will also be organized.

## Topics

- Random Partial Differential Equations
- Stochastic Differential Equations
- Optimisation under Uncertainty
- Scientific Machine Learning

## Speakers

- Harbir Antil, *George Mason University, USA*.
- Albert Cohen, *Sorbonne Université, France*.
- Caroline Geiersbach, *Weierstrass Institute, Germany*.
- Siddhartha Mishra, *ETH Zürich, Switzerland*.
- Fabio Nobile, *EPF Lausanne, Switzerland*.
- Michela Ottobre, *Heriot-Watt University, United Kingdom*.
- Thomas M. Surowiec, *Simula Research Laboratory, Norway*.
- Aretha Teckentrup, *University of Edinburgh, United Kingdom*.
- Raúl Tempone, *KAUST, Saudi Arabia, and RWTH Aachen, Germany*.
- Gilles Vilmart, *Université de Genève, Switzerland*.
- Jakob Zech, *Heidelberg University, Germany*.

## Timetable

## Lecture titles & Abstracts

**Optimal linear and non-linear dimensionality reduction** (*A. Cohen*)

Understanding how to optimally approximate general compact sets by finite-dimensional spaces is of central interest for designing efficient numerical methods in forward simulation or inverse problems. The concept of n-width, introduced in 1936 by Kolmogorov, is well tailored to linear approximation methods. Interest in n-widths has recently been revived by the approximation of parametrized/stochastic PDEs and the development of reduced basis methods. We will survey some of these recent results. We then focus on analogous concepts for nonlinear approximation, which are still the object of current research, motivated in particular by the development of neural networks and possible applications to hyperbolic parametrized PDEs, for which linear methods are not effective. We shall discuss a general framework that embraces various concepts of linear and nonlinear widths, and present some recent results and relevant open problems.
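As a rough computable illustration of the linear case (not taken from the lecture): the singular values of a snapshot matrix bound how well a parametric family can be captured by an n-dimensional linear space, a practical proxy for the Kolmogorov n-width. The parametric family below is an arbitrary toy choice.

```python
import numpy as np

# Snapshots of a toy parametric family u(x; mu) = 1 / (1 + mu * x^2),
# sampled on a grid for many parameter values (a stand-in for solutions
# of a parametrized PDE).
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.1, 10.0, 100)
S = np.array([1.0 / (1.0 + mu * x**2) for mu in mus]).T   # shape (n_x, n_mu)

# The singular values of the snapshot matrix control the error of the
# best rank-n linear approximation (proper orthogonal decomposition).
sigma = np.linalg.svd(S, compute_uv=False)
rel_err = sigma / sigma[0]

# Fast decay means the family is well approximated by a low-dimensional
# linear space, e.g. a reduced basis of leading left singular vectors.
n = int(np.argmax(rel_err < 1e-8))   # first rank reaching 1e-8 accuracy
```

For smooth (analytic) parameter dependence the decay is typically exponential, which is exactly the regime where reduced basis methods shine; for hyperbolic problems this decay is slow, motivating the nonlinear widths discussed in the lecture.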

**Numerical approximation of random partial differential equations** (*F. Nobile*)

Random input data, such as coefficients, forcing terms, boundary conditions, etc. in partial differential equation (PDE) models are often considered in engineering applications to address intrinsic variability or lack of precise characterization of the model’s behaviour. Monte Carlo is the reference method to approximate statistics of the PDE solution or related output quantities of interest. However, its cost to achieve high accuracy remains extremely high. In this talk we review alternative approximation methods that rely on the often smooth parameter-to-solution map, such as polynomial approximations based either on Galerkin projection of the underlying equation or on evaluations of the PDE solution in suitably chosen collocation points. Making these methods scalable with respect to the number of random parameters is challenging and requires a careful choice of the approximation space and interpolation points, leading to sparse grid constructions.
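A minimal sketch of the reference Monte Carlo approach on a toy random PDE (the specific coefficient and quantity of interest are illustrative assumptions, not from the lecture): a 1D elliptic problem with a random diffusion coefficient, solved by finite differences, with a plain Monte Carlo estimate of the mean of the solution at the midpoint.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                  # grid cells on (0, 1)
x = np.linspace(0.0, 1.0, N + 1)
xm = 0.5 * (x[:-1] + x[1:])              # cell midpoints
h = 1.0 / N

def solve(a_mid):
    """Finite-difference solve of -(a u')' = 1, u(0) = u(1) = 0."""
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(N - 1))

# Plain Monte Carlo over the random coefficient
# a(x, omega) = 1 + 0.5 * U(omega) * sin(pi x), U ~ Uniform(-1, 1);
# quantity of interest: the solution value at x = 1/2.
M = 1000
qois = np.empty(M)
for m in range(M):
    U = rng.uniform(-1.0, 1.0)
    u = solve(1.0 + 0.5 * U * np.sin(np.pi * xm))
    qois[m] = u[N // 2 - 1]              # interior node at x = 0.5

est = qois.mean()
half_width = 1.96 * qois.std(ddof=1) / np.sqrt(M)   # ~95% CI half-width
```

The statistical error decays only like M^(-1/2), and each sample requires a full PDE solve; this is the cost barrier that motivates the polynomial and sparse-grid surrogates of the lecture.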

**Multilevel Monte Carlo methods for random partial differential equations** (*A. Teckentrup*)

Multilevel Monte Carlo methods have become increasingly popular over the last two decades, due to their ease of implementation and their ability to significantly outperform standard Monte Carlo approaches in complex simulation tasks. In this talk, we will show how the multilevel methodology can be applied to modelling and simulation using partial differential equations. We will further discuss suitable sampling methods for random fields and how smoothing can be incorporated in the coarse levels of the multilevel estimator.
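To make the multilevel idea concrete, here is a hedged toy sketch (not lecture material): a multilevel Monte Carlo estimator of E[S_T] for geometric Brownian motion, where each level uses Euler-Maruyama with twice as many time steps and the coarse/fine pair is driven by the same Brownian path. The model, parameters, and sample allocation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
r, sig, T, S0 = 0.05, 0.2, 1.0, 1.0   # GBM: dS = r*S dt + sig*S dW

def euler_pair(level, n_samples):
    """Coupled fine/coarse Euler-Maruyama samples of S_T.

    Returns P_fine - P_coarse (just P_fine on level 0)."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for k in range(nf):
        Sf = Sf * (1.0 + r * dt + sig * dW[:, k])
    if level == 0:
        return Sf
    Sc = np.full(n_samples, S0)
    dWc = dW[:, 0::2] + dW[:, 1::2]      # coarse increments, same path
    for k in range(nf // 2):
        Sc = Sc * (1.0 + r * 2 * dt + sig * dWc[:, k])
    return Sf - Sc

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: the
# corrections have small variance, so few samples are needed on the
# expensive fine levels.
levels = 5
M = [40000, 10000, 4000, 2000, 1000, 500]
est = sum(euler_pair(l, M[l]).mean() for l in range(levels + 1))
```

The coupling through a shared Brownian path is what makes the correction variances shrink with the level; the sampling-of-random-fields and smoothing topics in the lecture address the analogous coupling for PDEs with random coefficients.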

**An Introduction to Optimization under Uncertainty** (*T. Surowiec*)

The purpose of this course is to give a broad overview of optimization under uncertainty (OUU) in the context of PDE-constrained optimization. Some parts will be more rigorous than others and Part II will have a special focus on computation.

*Part I: Models, Risk Aversion, Sampling*. We begin by briefly reviewing the basic workflow of PDE-constrained optimization and how this changes when we introduce uncertainty into the models. Afterwards, we will see various ways of including risk aversion in optimization models, for instance robust optimization and the use of risk measures. The first part then closes with a deeper look at what we are actually faced with when we wish to solve these problems numerically. This includes a discussion of the role of sampling (both when to sample and how that affects the overall algorithm), as well as the efficient computation of gradients and Hessian-vector products, which are essential for efficient optimization algorithms.

*Part II: Stability, Algorithms, and Computational Statistics*. Part II begins with a detailed discussion of what we mean by “stability” in OUU. For a model linear-quadratic risk-neutral problem, we develop a theory of stability using the method of probability metrics. Building on the stability results, we obtain asymptotic convergence statements for classical Monte Carlo approximations that indicate how the optimal values and optimal solutions behave as the sample size increases. Continuing in this setting, we turn to computational aspects. The semismooth Newton method is presented in detail. Finally, we investigate how the theoretical results manifest in practice by computing experimental rates of convergence and using subsampling-based bootstrap confidence intervals.
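A tiny numerical experiment in the spirit of Part II (an illustrative assumption, not the course's model problem): for a risk-neutral quadratic toy problem the sample-average approximation (SAA) minimizer has a closed form, so the experimental convergence rate of the optimal solution in the sample size M can be measured directly.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, mu = 0.1, 1.0
z_true = mu / (1.0 + alpha)    # exact solution of min_z E[(z - xi)^2] + alpha*z^2

def saa_solution(M):
    # Sample-average approximation: replace E[.] by an average over M
    # draws of xi ~ N(mu, 1); the SAA minimizer is mean(xi) / (1 + alpha).
    xi = rng.normal(mu, 1.0, size=M)
    return xi.mean() / (1.0 + alpha)

# Experimental rate: the RMSE of the SAA solution over repeated
# replications should decay like M^(-1/2) as the sample size M grows.
Ms = [100, 400, 1600, 6400]
rmse = np.array([np.sqrt(np.mean([(saa_solution(M) - z_true) ** 2
                                  for _ in range(200)])) for M in Ms])
rates = np.log(rmse[:-1] / rmse[1:]) / np.log(4.0)   # expect ~0.5
```

The measured rates cluster around 1/2, matching the asymptotic Monte Carlo convergence statements the stability theory delivers; for PDE-constrained problems the same experiment requires one optimization solve per replication.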

**Stochastic approximation for PDE-constrained optimization under uncertainty** (*C. Geiersbach*)

The focus of this lecture is a class of risk-neutral optimization problems where the state is a solution to an underlying random PDE. This problem will first be embedded into a stochastic optimization framework, where the theory is now classical in the finite-dimensional case. Challenges in establishing optimality conditions will be highlighted, especially in the context where the state is subject to additional constraints. The second part of the lecture is dedicated to stochastic approximation methods for solving these problems. Techniques for proving convergence are presented along with convergence rates. Numerical error can be accounted for and adequately controlled as part of the optimization methods. For state-constrained problems, Moreau-Yosida regularization can be employed to transform the problem into a sequence of subproblems that can also be handled by stochastic approximation techniques.
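The core stochastic approximation iteration can be sketched in a few lines on a toy constrained problem (the problem and step-size rule are illustrative assumptions): a projected Robbins-Monro method with one random sample per iteration and step sizes proportional to 1/k.

```python
import numpy as np

rng = np.random.default_rng(3)

# Projected stochastic gradient (Robbins-Monro) iteration for
#   min_{z in [0, 0.5]}  E[(z - xi)^2],   xi ~ N(1, 1).
# The unconstrained minimizer is E[xi] = 1, so the constrained
# solution sits on the boundary, z* = 0.5.
lo, hi = 0.0, 0.5
z = 0.0
for k in range(1, 20001):
    xi = rng.normal(1.0, 1.0)             # one fresh sample per step
    grad = 2.0 * (z - xi)                 # unbiased stochastic gradient
    z = np.clip(z - (1.0 / k) * grad, lo, hi)   # step ~ 1/k, then project
```

In the PDE-constrained setting each `grad` evaluation involves a state and an adjoint solve for one realization of the random inputs, and the projection is replaced by the constraint-handling machinery (e.g. Moreau-Yosida regularization) discussed in the lecture.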

**Digital Twins and Optimisation Under Uncertainty — Compression and Decomposition** (*H. Antil*)

This lecture will begin by describing the role of optimisation under uncertainty and risk-measures in various applications. Examples include digital twins to identify weaknesses in structures, biomedical applications to create a digital twin population, and neuromorphic imaging to sample data at the micro-scale and detect objects that are otherwise invisible to traditional cameras.

The lecture is divided into four main parts. The first part will focus on a new algorithm called Tensor Train Risk (TTRISK) to solve high-dimensional risk-averse optimisation problems governed by ODEs and/or PDEs under uncertainty. Both full and reduced formulations are considered. The focus is on low-rank tensor approximations of random fields discretized using stochastic collocation. The nonsmooth risk-measures are smoothed, and an adaptive strategy is developed to select the smoothing parameter. The adaptive approach balances the smoothing and tensor approximation errors.
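To illustrate the smoothing of a nonsmooth risk measure (this is a generic sketch, not the TTRISK algorithm): CVaR in its Rockafellar-Uryasev form involves the nonsmooth plus function max(0, s), which can be replaced by a smooth log-exponential surrogate with parameter eps. The distribution and smoothing choice below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Rockafellar-Uryasev form of CVaR with a smoothed plus function:
#   CVaR_beta(X) = min_t  t + E[max(0, X - t)] / (1 - beta),
# replacing max(0, s) by  plus_eps(s) = eps * log(1 + exp(s / eps)).
beta, eps = 0.9, 1e-3
X = rng.normal(0.0, 1.0, size=200_000)   # sampled losses

def smoothed_objective(t):
    s = (X - t) / eps
    # numerically stable eps * log(1 + exp(s)) via logaddexp(0, s)
    return t + eps * np.logaddexp(0.0, s).mean() / (1.0 - beta)

# Minimize over t on a grid; the minimizer approximates VaR_beta and
# the minimum value approximates CVaR_beta.
ts = np.linspace(0.5, 2.0, 301)
vals = np.array([smoothed_objective(t) for t in ts])
cvar = vals.min()
```

For standard normal losses the exact value is CVaR_0.9 = phi(Phi^{-1}(0.9)) / 0.1, about 1.755, so the smoothed sampled objective comes out close; shrinking eps reduces the smoothing error, which is the trade-off the lecture's adaptive strategy balances against the tensor approximation error.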

Secondly, such problems with additional almost sure constraints on the state variable are considered. The constraints are handled using a penalty approach. Theoretical bounds on the constraint violation in terms of the penalty parameter are established. Again, tensor-train decomposition strategies will be used to solve the problems.

In both the first and second parts, control/design parameters are taken to be deterministic. In the third part of the lecture, through various applications, the role of sparsity in stochastic control/design variables will be explored.

Finally, various realistic applications are considered. A particular focus will be on the use of risk-measures in digital twins. If time permits, the novel notion of neuromorphic imaging will also be introduced.

**Stiff integrators for stochastic (partial) differential equations** (*G. Vilmart*)

In this lecture, we present the design and analysis of efficient integrators for stiff stochastic problems, including the long time integration of ergodic stochastic differential equations and the stochastic heat equation.

We introduce not only implicit Runge-Kutta type methods, but also recent advances in explicit stabilized integrators, which are a popular alternative to avoid the severe timestep restrictions faced by standard explicit integrators.
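The stiffness issue can be seen on a one-line test equation (a standard textbook illustration, not taken from the lecture): for dX = -lam*X dt + sigma dW with lam*dt > 2, explicit Euler-Maruyama blows up, while a drift-implicit step remains stable for any step size.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stiff linear test SDE  dX = -lam*X dt + sigma dW  with lam*dt = 5:
# explicit Euler-Maruyama is unstable (amplification factor 1 - lam*dt = -4),
# while the drift-implicit scheme
#   X_{n+1} = (X_n + sigma*dW_n) / (1 + lam*dt)
# damps the stiff mode for any step size.
lam, sigma, dt, n_steps = 50.0, 1.0, 0.1, 200
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)

x_exp, x_imp = 1.0, 1.0
for k in range(n_steps):
    x_exp = x_exp + dt * (-lam * x_exp) + sigma * dW[k]   # explicit EM
    x_imp = (x_imp + sigma * dW[k]) / (1.0 + lam * dt)    # drift-implicit EM
```

Explicit stabilized (Chebyshev-type) integrators, a topic of the lecture, aim for the same stability without solving implicit systems, which matters for stochastic PDEs where the implicit solve is expensive.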

**Uniform in time (numerical) approximations of Stochastic Differential Equations** (*M. Ottobre*)

Complicated models, for which a detailed analysis is too far out of reach, are routinely approximated via a variety of procedures, for example by use of numerical schemes. When using a numerical scheme we make an error which is small over small time-intervals but typically compounds over longer time-horizons. Hence, in general, the approximation error grows in time, so that the results of our simulations are less reliable when the simulation is run for longer. However, this is not necessarily the case, and one may be able to find dynamics and corresponding approximation procedures for which the error remains bounded, uniformly in time. We will discuss some criteria and approaches to understand when this is possible and present in particular a method based on the analysis of derivatives of Markov semigroups. We will demonstrate that this method is very general and can be used when the approximation is produced by numerical schemes, particle methods or multiscale methods, to mention just a few.
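A small experiment hinting at the phenomenon (an illustrative toy, not from the lecture): for the contractive, ergodic SDE dX = -X dt + dW, two Euler-Maruyama discretizations with different step sizes, driven by the same Brownian motion, stay close to each other over an arbitrarily long horizon instead of drifting apart.

```python
import numpy as np

rng = np.random.default_rng(9)

# Contractive SDE dX = -X dt + dW: compare a coarse Euler-Maruyama path
# (step dt) with a fine one (step dt/2) driven by the same Brownian
# increments over a long horizon, tracking the maximal gap.
dt, T = 0.01, 200.0
n = int(T / dt)
xc = xf = 1.0
max_gap = 0.0
for k in range(n):
    dw1 = rng.normal(0.0, np.sqrt(dt / 2))
    dw2 = rng.normal(0.0, np.sqrt(dt / 2))
    xf = xf - xf * (dt / 2) + dw1          # two fine half-steps
    xf = xf - xf * (dt / 2) + dw2
    xc = xc - xc * dt + (dw1 + dw2)        # one coarse step, same noise
    max_gap = max(max_gap, abs(xf - xc))
```

The gap stays uniformly small because the contractive drift keeps dissipating past errors; for non-contractive dynamics the same experiment typically shows a gap that grows with the horizon, which is what the semigroup-derivative criteria in the lecture are designed to distinguish.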

**Weak and strong approximation for McKean-Vlasov Stochastic Differential Equations** (*R. Tempone*)

This talk considers the numerical approximation of McKean-Vlasov stochastic differential equations (MV-SDEs), including computations involving rare events. MV-SDEs are crucial for modeling systems where the dynamics of each particle or agent depend not only on its own state but also on the distribution of the states of all other particles in the system. This characteristic makes them particularly useful when interactions among many components lead to complex collective behaviors. Applications across various domains can be modeled with MV-SDEs, such as systemic risk modeling (financial mathematics), epidemiology (biological sciences), energy markets (engineering), and opinion dynamics (social sciences). Mathematically, the time evolution of the state governed by an MV-SDE depends not just on the state itself but also on its law. In the first part of the talk, we employ a system of interacting stochastic particles as an approximation of the McKean–Vlasov equation and utilize classical stochastic analysis tools, namely Itô’s formula and the Kolmogorov–Chentsov continuity theorem, to prove the existence and uniqueness of strong solutions for a broad class of McKean–Vlasov equations as a limit of the conditional expectation of exchangeable particles. Considering an increasing number of particles in the approximating stochastic particle system, we also prove the L^p strong convergence rate and derive the weak convergence rates using the Kolmogorov backward equation and variations of the stochastic particle system. In this case, there are two discretization parameters: the number of time steps and the number of particles. Using these two parameters, we consider different variants of the Monte Carlo (MC), multilevel Monte Carlo (MLMC), and multi-index Monte Carlo (MIMC) methods.
We characterize the optimal work complexity of MIMC and show that, to achieve this complexity, one uses a partitioning estimator to correlate samples of systems with different numbers of particles. In the last part of the talk, we will briefly discuss importance sampling techniques for rare events associated with the MV-SDE, based on stochastic optimal control and hierarchical sampling techniques (MLMC, MIMC). Numerical experiments show that the proposed importance sampling substantially reduces the Monte Carlo estimator’s variance, resulting in a lower computational cost in the rare event regime than standard Monte Carlo estimators.
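The interacting-particle approximation at the heart of the talk can be sketched on a mean-field Ornstein-Uhlenbeck toy model (an illustrative assumption, not the talk's examples): the law-dependent drift term E[X_t] is replaced by the empirical mean of P exchangeable particles.

```python
import numpy as np

rng = np.random.default_rng(6)

# Interacting-particle Euler-Maruyama approximation of the
# McKean-Vlasov SDE  dX_t = -(X_t - E[X_t]) dt + sigma dW_t,
# replacing the law-dependence E[X_t] by the empirical mean of
# P exchangeable particles (propagation of chaos).
P, n_steps, dt, sigma = 5000, 200, 0.01, 0.5
X = rng.normal(2.0, 1.0, size=P)           # X_0 ~ N(2, 1)
for _ in range(n_steps):
    m = X.mean()                           # empirical surrogate for E[X_t]
    X = X - (X - m) * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=P)

# For this model E[X_t] = E[X_0] for all t, and the variance relaxes
# toward sigma^2 / 2, so the particle system can be checked directly.
```

The two discretization parameters named in the abstract are visible here as `n_steps` and `P`; MLMC and MIMC variants refine them jointly, coupling systems with different particle counts via partitioning estimators.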

**Neural Networks for UQ** (*J. Zech*)

In recent years, neural networks have emerged as a powerful tool for tackling computational problems traditionally addressed by numerical algorithms in scientific computing. A key advantage of neural networks is their ability to effectively handle high dimensionality, which naturally arises in uncertainty quantification (UQ). This lecture will cover some basic results of neural network theory in the context of solving parametric partial differential equations and the computation of surrogate models. Specifically, we examine the approximation capabilities of neural networks and discuss their potential to overcome the curse of dimensionality. Time permitting, we additionally address the use of neural networks for solving Bayesian inverse problems.
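A minimal surrogate-model sketch in this spirit (an illustrative assumption, not from the lecture): a one-hidden-layer network with random, fixed first-layer weights and a least-squares fit of the output layer (a "random features" model) trained to emulate a smooth parameter-to-QoI map.

```python
import numpy as np

rng = np.random.default_rng(7)

def g(y):
    # Toy stand-in for an expensive parameter-to-QoI map (e.g. a PDE solve)
    return np.exp(-np.sum(y**2, axis=-1))

# One-hidden-layer network: random fixed hidden weights, tanh activation,
# output layer fitted by least squares on training evaluations of g.
d, width, n_train = 4, 300, 2000
W = rng.normal(size=(d, width))        # hidden weights (not trained)
b = rng.uniform(-1.0, 1.0, width)      # hidden biases

def features(Y):
    return np.tanh(Y @ W + b)

Y_train = rng.uniform(-1.0, 1.0, size=(n_train, d))
coef, *_ = np.linalg.lstsq(features(Y_train), g(Y_train), rcond=None)

# Out-of-sample accuracy of the cheap surrogate
Y_test = rng.uniform(-1.0, 1.0, size=(500, d))
err = np.sqrt(np.mean((features(Y_test) @ coef - g(Y_test)) ** 2))
```

Once trained, the surrogate evaluates in microseconds, so UQ tasks (moments, failure probabilities, Bayesian posteriors) can be run on it instead of the expensive model; the lecture's approximation theory asks how such errors scale as the parameter dimension d grows.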

**Learning Solutions of PDEs** (*S. Mishra*)

PDEs are ubiquitous as mathematical models in the sciences and engineering. Currently, numerical schemes are the main tools for the approximation and simulation of PDEs. However, given their computational expense, there has been considerable interest in recent years in realizing fast surrogates for numerical methods by learning the solutions of PDEs directly from data. We review this field, often termed operator learning, by discussing its main concepts, models and data sets. Models ranging from neural operators to multi-scale vision transformers and denoising diffusion models will be presented. These models will be evaluated through extensive numerical experiments as well as rigorous mathematical guarantees.
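The operator-learning setup, input function to solution function, can be demonstrated on the simplest possible case (an illustrative toy, not a model from the lecture): the solution operator of the 1D Poisson problem is linear, so a linear model fitted to (forcing, solution) pairs recovers it from data alone.

```python
import numpy as np

rng = np.random.default_rng(8)

# Learn the solution operator f -> u of  -u'' = f,  u(0) = u(1) = 0,
# from (forcing, solution) training pairs. The discrete operator is
# A^{-1} for a tridiagonal A, so a linear model can represent it exactly;
# neural operators target the same setup for nonlinear solution maps.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def solve(f):
    return np.linalg.solve(A, f)         # the "expensive" reference solver

# Training data: random forcings and their solutions
F = rng.normal(size=(200, n))
U = np.array([solve(f) for f in F])

# Fit the operator G by least squares: U ~ F @ G
G, *_ = np.linalg.lstsq(F, U, rcond=None)

# The learned operator reproduces the solver on an unseen forcing
f_new = np.sin(np.pi * h * np.arange(1, n + 1))
u_pred = f_new @ G
err = np.max(np.abs(u_pred - solve(f_new)))
```

For nonlinear PDEs no finite linear model suffices, which is where the neural operators, vision transformers, and diffusion models reviewed in the lecture come in, together with the question of what guarantees and data they require.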