Yannick Baraud: From robust tests to robust Bayes-like posterior distributions
We address the problem of estimating the distribution of presumed i.i.d. observations within the framework of Bayesian statistics. To do this, we consider a statistical model for the distribution of the data as well as a prior on it and we propose a new posterior distribution that shares some similarities with the classical Bayesian one. In particular, when the statistical model is exact, we show that this new posterior distribution concentrates its mass around the target distribution, just as the classical Bayesian posterior would do under appropriate assumptions. Nevertheless, we establish that this concentration property holds under weaker assumptions than those generally required for the classical Bayesian posterior. Specifically, we do not require that the prior distribution allocates sufficient mass on Kullback-Leibler neighbourhoods but only on the larger Hellinger ones. More importantly, unlike the classical Bayesian distribution, ours proves to be robust against a potential misspecification of the prior and the assumptions we started from. We prove that the concentration properties we establish remain stable when the equidistribution assumption is violated or when the data are i.i.d. with a distribution that does not belong to our model but only lies close enough to it. The results we obtain are non-asymptotic, involve explicit numerical constants and are based on the interplay between information theory and robust testing.
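For readers less familiar with the two notions of neighbourhood compared in the abstract, the standard definitions and the inequality that makes Hellinger balls larger can be stated as follows (notation is ours, not the speaker's):

```latex
h^2(P,Q) = \frac{1}{2}\int \left(\sqrt{dP}-\sqrt{dQ}\right)^2,
\qquad
K(P,Q) = \int \log\frac{dP}{dQ}\, dP .
```

Since $K(P,Q) \ge 2\,h^2(P,Q)$, every Kullback-Leibler ball $\{Q : K(P,Q)\le \varepsilon\}$ is contained in the Hellinger ball $\{Q : h^2(P,Q)\le \varepsilon/2\}$; requiring the prior to put mass on Hellinger neighbourhoods is therefore the weaker condition.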
Siem Jan Koopman: Nonlinear non-Gaussian state space models: from extended Kalman filter, importance sampling and particle filtering, towards the extremum Monte Carlo method
The Kalman filter is often regarded as the workhorse for solving intricate issues in time series modelling and analysis, including prediction and forecasting, especially when time series are subject to irregularities such as missing observations. When the stochastic time series process can be represented as a linear Gaussian state space model (also known as a dynamic linear model), the use of the Kalman filter can be justified as having optimal properties, including its delivery of minimum mean squared error linear estimators. However, in many cases of practical interest we need to depart from a linear Gaussian process and may want to consider a nonlinear non-Gaussian state space model to represent the time series process. The filtering of a time series in such a setting has relied for many decades on the extended Kalman filter, a method based on an approximating first-order Taylor expansion of the nonlinear parts. With the advance of computing power, simulation-based methods have emerged and received much attention as they often lead to exact (optimal) solutions subject only to Monte Carlo error. In the context of state space models, importance sampling methods have been explored, but it has been their sequential use over time that has led to the development of a powerful class of filtering methods known as particle filtering. In this presentation, these developments are reviewed and discussed in some detail. In addition, an introduction is given to a new general method of filtering that can incorporate the latest developments in machine learning in a natural manner. The details of the method and its statistical properties are discussed.
Joint work with Francisco Blasques and Karim Moussa (VU Amsterdam)
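The bootstrap particle filter, the simplest member of the particle-filtering class mentioned in the abstract, can be sketched as follows. The nonlinear toy model, the parameter values and the function name are illustrative choices of ours; this is not the extremum Monte Carlo method of the talk.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles=500, seed=0):
    """Bootstrap particle filter for the illustrative nonlinear model
        x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + v_t,  v_t ~ N(0, 1)
        y_t = x_t^2 / 20 + w_t,                                  w_t ~ N(0, 1)
    Returns the filtered means E[x_t | y_{1:t}]."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=n_particles)  # draw from the prior
    means = []
    for obs in y:
        # Propagate each particle through the nonlinear state equation.
        particles = (0.5 * particles
                     + 25 * particles / (1 + particles**2)
                     + rng.normal(0.0, 1.0, size=n_particles))
        # Importance weights: Gaussian observation density of y_t given x_t,
        # computed on the log scale for numerical stability.
        logw = -0.5 * (obs - particles**2 / 20) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Simulate data from the same model and run the filter.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(50):
    x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal()
    ys.append(x**2 / 20 + rng.normal())
est = bootstrap_particle_filter(ys)
```

The resampling step is what distinguishes sequential particle filtering from plain importance sampling: without it, after a few time steps almost all weight concentrates on a single particle.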