-
Axel Munk
Nanoscale Statistics
Conventional light microscopes have been used for centuries for the study of small length scales down to approximately 250 nm. Images from such a microscope are blurred and noisy, and the measurement error in such images can often be well approximated by Gaussian or Poisson noise. In the past, this approximation has been the focus of a multitude of deconvolution techniques in imaging. However, conventional microscopes have an intrinsic physical limit of resolution. Although this limit remained unchallenged for a century, it was broken for the first time in the 1990s with the advent of modern superresolution fluorescence microscopy techniques. Since then, superresolution fluorescence microscopy has become an indispensable tool for studying the structure and dynamics of living organisms. Current experimental advances go to the physical limits of imaging, where discrete quantum effects are predominant. Consequently, this technique is inherently of a non-Gaussian statistical nature, and we argue that recent technological progress also challenges the long-standing Poisson assumption. Thus, analysis and exploitation of the discrete physical mechanisms of fluorescent molecules and light, as well as their distributions in time and space, have become necessary to achieve the highest resolution possible. We will discuss some modern fluorescence microscopy techniques from a statistical modeling and analysis perspective. Several statistical imaging issues are discussed in more detail, including variational multiscale methods for stimulated emission depletion (STED) microscopy and drift correction for single marker switching (SMS) microscopy. We then address the issue of providing, with statistical evidence, information on the number and size of molecules in a specimen imaged with these techniques. To this end, we develop a prototypical model for fluorophore dynamics.
We illustrate that such methods benefit from advances in large-scale computing, for example, recent tools from convex optimization. We argue that in the future, even higher resolutions will require more sophisticated models, and that further development of nanoscale statistical evaluation methods, delving deeper into sub-Poissonian worlds, will become indispensable.
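As a purely illustrative aside (not the model developed in the talk), the discrete nature of fluorophore switching and photon detection can be mimicked by a generic Markov sketch in which a molecule moves between an emitting, a dark, and a bleached state, and detected photons in each frame follow a Poisson law; the states, transition probabilities, and per-frame brightness below are hypothetical choices made only for demonstration.

# Illustrative sketch only: a generic three-state Markov model for fluorophore
# dynamics (on / off / bleached) with Poisson photon counts while "on".
# All rates and constants are hypothetical; this is not the talk's model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame transition probabilities:
# state 0 = on (emitting), 1 = off (dark), 2 = bleached (absorbing)
P = np.array([
    [0.90, 0.08, 0.02],   # on  -> on / off / bleached
    [0.20, 0.80, 0.00],   # off -> on / off / bleached
    [0.00, 0.00, 1.00],   # bleached stays bleached
])

n_frames = 500
brightness = 30.0          # assumed mean photons per frame while "on"

state = 0                  # start in the emitting state
counts = np.zeros(n_frames, dtype=int)
on_frames = 0
for t in range(n_frames):
    if state == 0:
        on_frames += 1
        # Detected photons are Poisson given the emission intensity
        counts[t] = rng.poisson(brightness)
    state = rng.choice(3, p=P[state])

print("frames spent emitting:", on_frames)
print("total detected photons:", counts.sum())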
-
Gilles Blanchard
Spectral regularization methods for statistical inverse learning problems
We consider a statistical inverse learning (or inverse regression) problem, where we observe the image of a function \(f\) through a linear operator \(A\) at i.i.d. random design points \(X_i\), superposed with additive noise. This setting has a long history in statistics: existing references generally assume the observation point designs to be either regular and deterministic, or random with a sampling distribution that is known or comparable to the Lebesgue measure, and they consider regularity of the target function in terms of usual differentiability properties. More recently, in particular in the statistical learning literature, emphasis has been put on obtaining distribution-free results that can apply to a broader class of problems. From this "learning" point of view, we stress that the sampling distribution of the design points can be very general and is in particular unknown to the statistician. We will present some results concerning optimal convergence rates that extend and complete previously known ones. In particular, we will consider the optimality, from a statistical point of view, of a general class of linear spectral methods over regularity classes known as source conditions; this necessitates taking careful account of the discretization error resulting from sampling.
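A standard way to write this setting down (my paraphrase of the usual formulation, not taken verbatim from the talk) is
\[ Y_i = (A f)(X_i) + \varepsilon_i, \qquad i = 1, \dots, n, \]
with \(X_1, \dots, X_n\) drawn i.i.d. from an unknown design distribution. A linear spectral method then replaces the unstable inversion of the empirical normal equations by a filter function \(g_\lambda\) approximating \(t \mapsto 1/t\), for example Tikhonov \(g_\lambda(t) = (t+\lambda)^{-1}\), spectral cut-off, or Landweber iteration. The toy Python sketch below (my own illustration, with an arbitrary integration operator as \(A\), a uniform design, and hypothetical constants) shows the Tikhonov instance of such a filter on discretized data.

# Toy illustration (not code from the talk): Tikhonov spectral regularization
# for an inverse learning problem with random design. The forward operator is
# the integration operator (Af)(x) = \int_0^x f(t) dt, discretized on a grid;
# the design distribution is uniform and all constants are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)

n, m, lam, sigma = 400, 200, 1e-3, 0.05
t = (np.arange(m) + 0.5) / m              # discretization grid for f
f_true = np.sin(2 * np.pi * t)            # unknown target function

X = rng.uniform(0.0, 1.0, size=n)         # i.i.d. design points, law unknown to us
A = (t[None, :] <= X[:, None]) / m        # row i discretizes (Af)(X_i)
Y = A @ f_true + sigma * rng.normal(size=n)

# Tikhonov filter g_lam(T) = (T + lam I)^{-1} applied to the empirical
# covariance operator T_n = A^T A / n; other spectral filters (cut-off,
# Landweber, ...) would replace this linear solve.
T_n = A.T @ A / n
f_hat = np.linalg.solve(T_n + lam * np.eye(m), A.T @ Y / n)

print("relative L2 error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))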