SPARS 2015 – Special Lecture and Plenary Talks

- Special Lecture – Tuesday July 7, 16:00: Emmanuel J. Candès, "Modern Optimization meets Physics: Recent Progress on the Phase Retrieval Problem"
- Plenary talk 1 – Monday July 6, 09:10: Gabriel Peyré (CNRS and Université Paris-Dauphine, France), "Exact Support Recovery for Sparse Spikes Deconvolution"
- Plenary talk 2 – Monday July 6, 13:30: Zoubin Ghahramani (University of Cambridge, UK), "Sparsity? A Bayesian view"
- Plenary talk 3 – Tuesday July 7, 09:00: Roman Vershynin (University of Michigan, USA), "Recovering the hidden structure of sparse networks"
- Plenary talk 4 – Tuesday July 7, 13:30: Maryam Fazel (University of Washington, USA), "Recovery and denoising with simultaneous structures"
- Plenary talk 5 – Wednesday July 8, 09:00: Michael Lustig (UC Berkeley, USA), "Sparse MRI since SPARS '05"
- Plenary talk 6 – Wednesday July 8, 13:30: Justin Romberg (Georgia Institute of Technology, USA), "Structured recovery for imaging and image processing"
- Plenary talk 7 – Thursday July 9, 09:00: Venkat Chandrasekaran (Caltech, USA), "Graphical Models, Latent Variables, and Sufficient Dimension Reduction"
- Plenary talk 8 – Thursday July 9, 13:50: Martin Wainwright (UC Berkeley, USA), "Randomized algorithms for high-dimensional optimization: Statistical and computational guarantees"
We are delighted that the following highly respected researchers have agreed to give the Special Lecture and Plenary Talks at SPARS 2015.

Special Lecture – Tuesday July 7, 16:00
Emmanuel J. Candès
Modern Optimization meets Physics: Recent Progress on the Phase Retrieval Problem
In many imaging problems, such as X-ray crystallography, detectors can only record the intensity or magnitude of a diffracted wave, not its phase. This means that we need to solve quadratic equations (a notoriously NP-hard task) rather than linear ones. While we present recently introduced effective convex relaxations of the phase retrieval problem, the focus is on a class of novel non-convex algorithms that can be provably exact. This class of algorithms, dubbed Wirtinger flows, finds the solution to randomized quadratic systems from a number of equations (samples) and flops that are both optimal. At a high level, the algorithm can be interpreted as a sort of stochastic gradient scheme, starting from an initial guess obtained by means of a spectral method. We demonstrate that the Wirtinger flow reconstruction degrades gracefully as the signal-to-noise ratio decreases. Its empirical performance will be demonstrated on phase retrieval problems at the center of spectacular current research efforts, collectively known as coherent diffraction imaging, aimed among other things at determining the 3D structure of large protein complexes.
This is joint work with Xiaodong Li, Mahdi Soltanolkotabi and Yuxin Chen.
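The two-stage scheme sketched in the abstract (a spectral initialization followed by gradient iterations on the quadratic loss) can be illustrated numerically. The following is a minimal NumPy sketch, not the authors' implementation: the problem sizes, random seed, and the step-size schedule are illustrative choices, and the measurements are complex Gaussian as in the randomized setting the talk describes.

```python
import numpy as np

# Minimal Wirtinger-flow-style sketch: recover x from phaseless measurements
# y_r = |<a_r, x>|^2 with complex Gaussian sensing vectors (illustrative sizes).
rng = np.random.default_rng(0)
n, m = 50, 400                                    # signal length, #measurements
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                            # quadratic (phaseless) data

# Spectral initialization: leading eigenvector of (1/m) * sum_r y_r a_r a_r^*,
# scaled using E|<a, x>|^2 = ||x||^2 for these sensing vectors.
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())

# Gradient iterations on f(z) = (1/2m) * sum_r (|<a_r, z>|^2 - y_r)^2,
# with a slowly ramped step size (an assumed schedule, for stability).
for t in range(1000):
    Az = A @ z
    grad = A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m
    mu = min(1.0 - np.exp(-(t + 1) / 330.0), 0.2)
    z = z - (mu / np.linalg.norm(z) ** 2) * grad

# Relative error up to a global phase (unrecoverable from magnitude data)
c = np.vdot(z, x)
c /= abs(c)
rel_err = np.linalg.norm(x - c * z) / np.linalg.norm(x)
```

Note that the global phase of the solution is never identifiable from magnitude-only data, which is why the error is measured after aligning phases.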
Bio: Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, and professor of electrical engineering (by courtesy) at Stanford University. Until 2009, he was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. His research interests are in applied mathematics, statistics, information theory, signal processing and mathematical optimization, with applications to the imaging sciences, scientific computing and inverse problems. Candès graduated from the École Polytechnique in 1993 with a degree in science and engineering, and received his Ph.D. in statistics from Stanford University in 1998. He has given over 60 plenary lectures at major international conferences, including the International Congress of Mathematicians (2014). Emmanuel received the 2006 Alan T. Waterman Award from the NSF, which recognizes the achievements of early-career scientists. Other honors include the 2013 Dannie Heineman Prize presented by the Academy of Sciences at Göttingen, the 2010 George Pólya Prize awarded by the Society for Industrial and Applied Mathematics (SIAM), and the 2015 AMS-SIAM George David Birkhoff Prize in Applied Mathematics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.
Plenary talk 1 – Monday July 6, 09:10
Gabriel Peyré, CNRS and Université Paris-Dauphine, France
Exact Support Recovery for Sparse Spikes Deconvolution
In this talk, I study sparse spikes deconvolution over the space of measures, following several recent works (see for instance [2,3]). For non-degenerate sums of Diracs, we show that, when the signal-to-noise ratio is large enough, total variation regularization of measures (which is the natural extension of the l1 norm of vectors to the setting of measures) recovers the exact same number of Diracs. We also show that both the locations and the heights of these Diracs converge toward those of the input measure as the noise drops to zero. The exact speed of convergence is governed by a specific dual certificate, which can be computed by solving a linear system. These results extend those obtained in [2]. We also draw connections between the performance of sparse recovery on a continuous domain and on a discretized grid. When the measure is positive, it is known that l1-type methods always succeed when there is no noise. We show that exact support recovery is still possible when there is noise. The signal-to-noise ratio should then scale like $1/t^{2N-1}$ where there are N spikes separated by a distance t. This reflects the intrinsic explosion of the ill-posedness of the problem [4]. This is joint work with Vincent Duval and Quentin Denoyelle, see [1,4] for more details.
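The grid-discretized side of this story can be played with numerically. Below is a small sketch (all sizes, the Gaussian point-spread function, the noise level and the regularization weight are illustrative assumptions, not parameters from the talk) that solves the l1-regularized deconvolution problem on a grid with ISTA and checks that the recovered support sits near the true spike locations.

```python
import numpy as np

# Grid-discretized sparse spikes deconvolution via l1 (LASSO), solved by ISTA.
# Toy setup: 3 well-separated spikes blurred by an assumed Gaussian kernel.
rng = np.random.default_rng(1)
n = 200
grid = np.linspace(0.0, 1.0, n)
x0 = np.zeros(n)
x0[[50, 90, 140]] = [1.0, -0.7, 1.3]             # true spike locations/heights

sigma = 0.03                                      # assumed point-spread width
Phi = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2))
y = Phi @ x0 + 0.01 * rng.standard_normal(n)      # low-noise observation

# ISTA for min_x 0.5*||Phi x - y||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(Phi, 2) ** 2                   # Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(4000):
    u = x - Phi.T @ (Phi @ x - y) / L             # gradient step
    x = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft threshold

support = np.flatnonzero(np.abs(x) > 0.05)        # recovered spike locations
```

On such a coherent (finely gridded) dictionary, the l1 solution typically concentrates on one or two grid points next to each true spike, which echoes the continuous-vs-discrete comparison in the abstract.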
[1] V. Duval and G. Peyré. Exact Support Recovery for Sparse Spikes Deconvolution. To appear in Foundations of Computational Mathematics, 2015.
[2] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics, 67(6):906-956, 2013.
[3] K. Bredies and H.K. Pikkarainen. Inverse problems in spaces of measures. ESAIM: Control, Optimisation and Calculus of Variations, 19:190-218, 2013.
[4] Q. Denoyelle, V. Duval and G. Peyré. Asymptotic of Sparse Recovery for Positive Measures. Preprint, HAL, 2015.

Bio: Gabriel Peyré graduated from École Normale Supérieure de Cachan, France, in 2003 and received his Ph.D. in applied mathematics from École Polytechnique, Paris, France, in 2005. Since 2006, he has been a researcher at the Centre National de la Recherche Scientifique (CNRS), working in Ceremade, Université Paris-Dauphine. He is head of the research group SIGMA-Vision, which is funded by the European Research Council (ERC); its activity is focused on sparse and adaptive representations with applications in computer vision, computer graphics and neurosciences. Since 2005 Gabriel Peyré has co-authored 40 papers in international journals, 50 conference proceedings in top vision and image processing conferences, and two books. He is the creator of the "Numerical Tours of Signal Processing" (www.numerical-tours.com), a popular online repository of Matlab/Scilab resources for teaching modern signal and image processing.
Plenary talk 2 – Monday July 6, 13:30
Zoubin Ghahramani, University of Cambridge, UK
Sparsity? A Bayesian view
I will first review some motivations for sparsity in the context of probabilistic modelling. I will then describe how some recent ideas from Bayesian nonparametrics can be used for flexible modelling with sparsity assumptions. Finally, I will challenge some strongly held views on sparsity, L1, and optimisation methods, showing experimental results to support my claims.
This talk is based on work with Shakir Mohamed, Katherine Heller, and David Knowles.
Bio: Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads a group of about 30 researchers. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research focuses on nonparametric Bayesian modelling and statistical machine learning. He has also worked on applications to bioinformatics, econometrics, and a variety of large-scale data modelling problems. He has published over 200 papers, receiving 25,000 citations (an h-index of 68). His work has been funded by grants and donations from EPSRC, DARPA, Microsoft, Google, Infosys, Facebook, Amazon, FX Concepts and a number of other industrial partners. In 2013, he received a $750,000 Google Award for research on building the Automatic Statistician. He serves on the advisory boards of Opera Solutions and Microsoft Research Cambridge, on the Steering Committee of the Cambridge Big Data Initiative, and in a number of leadership roles as programme and general chair of the leading international conferences in machine learning: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). More information can be found at http://mlg.eng.cam.ac.uk .
Plenary talk 3 – Tuesday July 7, 09:00
Roman Vershynin, University of Michigan, USA
Recovering the hidden structure of sparse networks
Most big real-world networks (social, technological, biological) are sparse. Most such networks have noticeable structure, which can be formed by clusters (communities) and hubs. When and how can a hidden structure be recovered from a sparse network? Known approaches to this problem come from a variety of disciplines: probability, combinatorics, physics, statistics, optimization, information theory, etc. We will focus on recently developed probabilistic approaches motivated by sparse recovery, where a network is regarded as a random measurement of the hidden structure.
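A toy instance of this viewpoint is community recovery in a stochastic block model, where the observed adjacency matrix is a random measurement of a hidden two-block partition. The sketch below (sizes and connection probabilities are illustrative assumptions) reads the partition off the second eigenvector of the adjacency matrix, one of the simplest spectral approaches to this problem.

```python
import numpy as np

# Two-community stochastic block model: recover the hidden partition from
# a single random adjacency matrix via a basic spectral method.
rng = np.random.default_rng(2)
n = 600
labels = np.repeat([0, 1], n // 2)                # hidden community labels
p, q = 0.10, 0.02                                 # within/between edge probs
P = np.where(labels[:, None] == labels[None, :], p, q)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops

# The top eigenvector tracks degrees; the second one separates communities.
_, V = np.linalg.eigh(A)                          # eigenvalues in ascending order
v2 = V[:, -2]                                     # 2nd-largest eigenvalue's vector
guess = (v2 > 0).astype(int)

# Agreement with the truth, up to relabeling of the two communities
acc = max(np.mean(guess == labels), np.mean(guess != labels))
```

Above the detection threshold (as in this toy regime, where the signal eigenvalue dominates the noise bulk), the sign pattern of the second eigenvector classifies almost all vertices correctly.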
Bio: Roman Vershynin is a Professor of Mathematics at the University of Michigan. His primary area of expertise is high-dimensional probability. He is interested in random geometric structures that appear across mathematics and data science, in particular random matrix theory, geometric functional analysis, convex and discrete geometry, geometric combinatorics, high-dimensional statistics, information theory, learning theory, signal processing, numerical analysis, and network science. Roman Vershynin received the equivalent of an M.S. from Kharkiv National University in Ukraine in 1996, and a Ph.D. from the University of Missouri-Columbia in 2000. Prior to his appointment at the University of Michigan, he was a faculty member at the University of California, Davis (2003-2008) and a postdoctoral fellow at the University of Alberta in Canada (2001-2003) and the Weizmann Institute of Science in Israel (2000-2001). He has served on the Editorial Board of the Journal of Fourier Analysis and Applications. He received an Alfred P. Sloan Research Fellowship in 2005 and a Bessel Research Award from the Humboldt Foundation in 2013, was an invited speaker at the International Congress of Mathematicians in 2010, and was a plenary speaker at the Summer Meeting of the Canadian Mathematical Society in 2011.