Invited Speakers
Multilevel Monte Carlo for Lévy-driven SDEs
Keywords: Multilevel Monte Carlo; Stochastic differential equations; Lévy process; Numerical integration
The introduction of multilevel methods for stochastic differential equations had a significant impact on the way we treat numerical integration problems nowadays. Typically, a multilevel method is based on a sequence of approximations that increases in precision but also in complexity. Its superior efficiency is caused by an elegant use of the whole hierarchy of approximations. In this talk we analyse multilevel Monte Carlo for Lévy-driven stochastic differential equations (SDEs). There are various natural choices for the family of approximations, and we discuss the efficiency of certain jump-adapted Euler schemes. We give upper bounds for the error in the computation of expectations of path-dependent functionals. Further, we point out the practical difficulties arising from the discontinuities of the driving process. We then turn to a central limit theorem for a particular family of approximations and discuss its implications for the multilevel scheme, in particular for the choice of good parameters.
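As a concrete (and heavily simplified) illustration of the multilevel idea, the sketch below couples fine and coarse Euler paths of a continuous diffusion through shared Brownian increments and sums the level corrections; the jump-adapted schemes for Lévy drivers discussed in the talk are considerably more involved. The drift `mu`, diffusion `sigma`, payoff, and level/sample choices are illustrative placeholders, not the talk's actual setup.

```python
import numpy as np

def euler_pair(T, n_fine, mu, sigma, x0, rng):
    """Coupled fine/coarse Euler paths of dX = mu(X)dt + sigma(X)dW,
    sharing the same Brownian increments (fine step h, coarse step 2h)."""
    h = T / n_fine
    dW = rng.normal(0.0, np.sqrt(h), n_fine)
    xf = xc = x0
    for k in range(0, n_fine, 2):
        xf += mu(xf) * h + sigma(xf) * dW[k]
        xf += mu(xf) * h + sigma(xf) * dW[k + 1]
        xc += mu(xc) * 2 * h + sigma(xc) * (dW[k] + dW[k + 1])
    return xf, xc

def mlmc(payoff, levels, samples, mu, sigma, x0=1.0, T=1.0, seed=0):
    """Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l, N in zip(levels, samples):
        n_fine = 2 ** (l + 1)  # fine grid on level l
        diffs = np.empty(N)
        for i in range(N):
            xf, xc = euler_pair(T, n_fine, mu, sigma, x0, rng)
            # level 0 uses the fine path alone; higher levels use the correction
            diffs[i] = payoff(xf) - (payoff(xc) if l > 0 else 0.0)
        est += diffs.mean()
    return est
```

Because the correction terms $P_l - P_{l-1}$ have small variance under the coupling, far fewer samples are needed on the expensive fine levels than a single-level scheme would require.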
Creating unbiased Monte Carlo schemes from biased ones: Theory and applications
Keywords: Monte Carlo; Stochastic differential equations; Markov chain Monte Carlo
In many Monte Carlo settings, one wishes to compute the expectation of a random object that cannot be generated in finite time. In such settings, it is often the case that one can instead compute approximations to the random object, where the computer time required to generate the approximation increases with the quality of the approximation. An example of such a problem context is that of stochastic differential equations (SDEs), where the approximation is typically obtained via an appropriate discretization of the equation. Of course, when such approximations are used, the resulting estimators are generally biased. We show that in the presence of an appropriate coupling of the sequence of approximations, one can create new estimators that are unbiased. These new unbiased estimators often enjoy much better rates of convergence than do the underlying biased schemes. Furthermore, because the expectation can then be computed by averaging independent unbiased samples, the wide range of output analysis methods available for conventional Monte Carlo becomes applicable. In the SDE setting, such unbiased schemes are closely related to multilevel Monte Carlo. We discuss this new class of unbiased estimators in the SDE setting, in that of Markov chain Monte Carlo, and in several other problem contexts.
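The debiasing idea can be sketched with a randomized truncation level, in the spirit of Rhee and Glynn's randomized multilevel estimator: draw a random level with geometric tails and reweight each correction term by the inverse tail probability, so the expectation telescopes to the limit of the approximation sequence. The toy `approx` sequence in the usage below is deterministic, chosen only so the expectation can be checked; an SDE application would substitute coupled discretizations.

```python
import numpy as np

def unbiased_estimate(approx, p=0.5, max_level=30, rng=None):
    """Randomized-truncation debiasing: draw level L with geometric tails
    and reweight the corrections Y_l - Y_{l-1} by 1 / P(L >= l).
    `approx(l)` returns the level-l approximation (coupled across levels
    in stochastic applications)."""
    rng = rng or np.random.default_rng()
    L = min(rng.geometric(p) - 1, max_level)   # L in {0, 1, 2, ...}
    total, prev = 0.0, 0.0
    for l in range(L + 1):
        tail = (1 - p) ** l                    # P(L >= l) for this geometric law
        y = approx(l)
        total += (y - prev) / tail
        prev = y
    return total
```

For example, with the deterministic sequence $y_l = (1 + 2^{-l})^{2^l} \to e$, averaging independent draws of `unbiased_estimate` recovers $e$ without the truncation bias any fixed level would incur.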
Path space MCMC methods in computer graphics
Keywords: computer graphics; rendering; path space; MCMC
The objective of a rendering algorithm is to create a photograph of a simulated reality, which entails finding all the paths along which light can flow from a set of light sources to the camera. We will begin with an overview of the underlying physics and analyze how it leads to a high-dimensional integration problem that is typically solved using Monte Carlo methods. Afterward, we will focus on a specific algorithm, a fairly unusual Markov chain Monte Carlo method that operates on a state space consisting of light paths. In many settings, this state space collapses to a lower-dimensional submanifold, and we will review an extension of the MCMC method that is informed by the high-dimensional differential geometry of this submanifold. Although MCMC methods have many desirable properties, there are still fundamental problems that currently impede adoption by the movie industry. In the last part, we will review a range of issues and conclude with an outlook on future research.
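For readers less familiar with MCMC, the accept/reject core that path-space renderers build on is ordinary Metropolis-Hastings. The sketch below runs it on a toy one-dimensional target rather than on light paths; in a renderer, the proposal would mutate an entire light path and the target would be the path's image contribution.

```python
import numpy as np

def metropolis(log_target, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, pi(x') / pi(x))."""
    rng = np.random.default_rng(seed)
    x = x0
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        # acceptance test in log space for numerical stability
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain[i] = x
    return chain
```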
Walsh Figure of Merit (WAFOM) for digital nets: An easy measure for higher order convergent QMC
Keywords: QMC integration; low-discrepancy point sets; digital nets; Walsh-Fourier expansion; WAFOM
Let $f$ be a function defined on a hypercube $I$. Quasi-Monte Carlo (QMC) integration of $f$ by a (finite) point set $P$ in $I$ uses the average of $f$ over $P$ as an approximation of the integral of $f$ over $I$. The classical Koksma-Hlawka inequality shows that the QMC integration error is bounded by the product of "the variation of $f$" and "the discrepancy of $P$ from uniformity", which justifies finding and keeping one $P$ with small discrepancy and using it for various integrands. The corresponding star discrepancy decreases as $O((\log |P|)^{s-1}/|P|)$. Recently, Josef Dick introduced the notion of "higher order convergent" point sets: for smoother functions $f$, he proved a Koksma-Hlawka-type inequality in which the corresponding "discrepancy of $P$" decreases as $O(|P|^{-a})$ for $a$-smooth functions (hence the name "higher order convergent"). We call (a discrete version of) Dick's discrepancy the Walsh Figure of Merit (WAFOM) of $P$, and show a fast way to compute it by Walsh-Fourier inversion (joint work with Saito and Matoba). An analogous method gives a fast algorithm to compute the $t$-value (joint work with Josef Dick). We shall explain the theory of WAFOM and experimental results comparing it with other QMC point sets.
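A minimal one-dimensional illustration of the QMC average described above, using base-2 van der Corput points as a stand-in for a digital net (WAFOM itself is not computed here):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy
    sequence in [0, 1), via the radical-inverse of the index digits."""
    pts = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += f * (k % base)
            k //= base
            f /= base
        pts[i] = x
    return pts

def qmc_integrate(f, n):
    """QMC estimate of the integral of f over [0, 1]:
    the plain average of f over the point set."""
    return f(van_der_corput(n)).mean()
```

With $n = 2^k$ points the sequence reduces to the regular grid $\{j/2^k\}$, and the error for smooth $f$ is far smaller than the $O(n^{-1/2})$ Monte Carlo rate.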
Vandermonde nets: A new family of digital nets
Keywords: low-discrepancy point sets; digital nets
We introduce a family of digital nets for which the generating matrices have, in a certain sense, a Vandermonde structure and which are therefore called Vandermonde nets. There is some analogy with the theory of polynomial lattice point sets; for instance, good Vandermonde nets can be obtained by component-by-component (CBC) constructions. An important difference is that explicit constructions of good Vandermonde nets are possible in higher dimensions than for polynomial lattice point sets. Vandermonde nets do not, in general, belong to the large and well-known family of hyperplane nets and thus form a truly new family of digital nets.
Some results on the complexity of numerical integration
Keywords: optimal algorithms and error bounds
We present some results on the complexity of numerical integration. We start with the seminal paper of Bakhvalov (1959) and end with new results on the curse of dimension, on the complexity of oscillatory integrals, and on randomized algorithms.
Fully automated Approximate Bayesian Computation
Keywords: ABC
Approximate Bayesian computation (ABC) has been developed over the past fifteen years to handle complex (if well-defined) stochastic models. When the datasets involved in such models are of large dimension, it is necessary (for efficiency reasons) to reduce the data to a vector of statistics. In most situations, this vector is not a sufficient statistic and there is no automated derivation of such a vector. Paul Fearnhead and Dennis Prangle made some advances in the derivation of the relevant statistics in a 2012 (RSS) Read Paper on "semi-automatic Approximate Bayesian computation", and we follow this attempt by proposing some novel constructions of the relevant statistics, with immediate applications to the population genetics problems from which ABC originally stemmed.
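The basic rejection scheme that the summary-statistics question sits inside can be sketched as follows. The Gaussian toy model and the names `simulate`, `prior_sample`, and `summary` in the usage are illustrative placeholders; note that the sample mean used there happens to be a sufficient statistic, unlike the general case the abstract addresses.

```python
import numpy as np

def abc_rejection(data, simulate, prior_sample, summary, eps, n_draws, seed=0):
    """Basic ABC rejection: keep prior draws whose simulated data have
    summary statistics within eps of the observed summaries."""
    rng = np.random.default_rng(seed)
    s_obs = summary(data)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        s_sim = summary(simulate(theta, rng))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

For instance, inferring the mean of a unit-variance Gaussian from 50 observations with a flat prior on $(-5, 5)$ concentrates the accepted draws around the observed sample mean.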
Adaptive strategies for Multilevel Monte Carlo
Keywords: Multilevel Monte Carlo; Adaptive Algorithms
We provide a quick glance into recently developed Adaptive Multilevel Monte Carlo (MLMC) methods for the following mathematical models: partial differential equations with random inputs, Itô stochastic differential equations and pure jump processes. In this context, the notion of adaptivity includes several aspects, such as mesh refinements based on either a priori or a posteriori error estimates, the local choice of different time-stepping methods, and the selection of the total number of levels and the number of samples at different levels. Our adaptive MLMC estimator is based on a hierarchy of adaptively refined, non-uniform time discretizations and, as such, may be considered a generalization of the uniform-discretization MLMC method introduced independently by M. Giles and S. Heinrich. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and have the correct complexity, with an improved control of the complexity constant factor in the asymptotic analysis [1]. We also propose a novel Bayesian estimation of parameters needed in MLMC algorithms, taking particular care of the deepest levels, where only a few realizations are available to produce the estimates. We present several numerical examples, including some dynamics with singularities and/or non-smooth observables, to illustrate the above results and the corresponding computational savings.
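One of the parameters mentioned, the number of samples per level, has a standard closed form in the uniform-discretization setting (this is the classical allocation by Lagrange multipliers, not the adaptive Bayesian variant proposed in the talk): minimizing the total cost $\sum_\ell N_\ell C_\ell$ subject to the variance target $\sum_\ell V_\ell/N_\ell \le \varepsilon^2$ gives $N_\ell \propto \sqrt{V_\ell/C_\ell}$.

```python
import numpy as np

def mlmc_sample_sizes(variances, costs, eps):
    """Per-level sample counts minimizing sum N_l * C_l subject to
    sum V_l / N_l <= eps^2; the optimum is N_l = lam * sqrt(V_l / C_l)
    with lam = sum_l sqrt(V_l * C_l) / eps^2."""
    v = np.asarray(variances, float)
    c = np.asarray(costs, float)
    lam = np.sum(np.sqrt(v * c)) / eps**2
    return np.ceil(lam * np.sqrt(v / c)).astype(int)
```

Since the level variances $V_\ell$ decay while the costs $C_\ell$ grow, the formula automatically places most samples on the cheap coarse levels.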
