SPARS 2015

Signal Processing with Adaptive Sparse Structured Representations

July 6-9, 2015, Cambridge, UK.

SPARS 2015 – Special Lecture and Plenary Talks

We are delighted that the following highly respected researchers have agreed to give the Special Lecture and Plenary Talks at SPARS 2015, as listed below.

Special Lecture – Tuesday July 7, 16:00
Emmanuel J. Candès
Modern Optimization meets Physics: Recent Progress on the Phase Retrieval Problem

In many imaging problems such as X-ray crystallography, detectors can only record the intensity or magnitude of a diffracted wave, as opposed to measuring its phase. This means that we need to solve quadratic equations, which is notoriously NP-hard, as opposed to linear ones. While we review recently introduced effective convex relaxations of the phase retrieval problem, the focus is on a class of novel non-convex algorithms that can be provably exact. This class of algorithms, dubbed Wirtinger flows, finds the solution to randomized quadratic systems from a number of equations (samples) and flops that are both optimal. At a high level, the algorithm can be interpreted as a sort of stochastic gradient scheme, starting from a guess obtained by means of a spectral method. We demonstrate that the Wirtinger flow reconstruction degrades gracefully as the signal-to-noise ratio decreases. The empirical performance is demonstrated on phase retrieval problems, which are at the centre of spectacular current research efforts, collectively known as coherent diffraction imaging, aimed among other things at determining the 3D structure of large protein complexes.

This is joint work with Xiaodong Li, Mahdi Soltanolkotabi and Yuxin Chen.
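
As an illustration of the algorithmic template described above, here is a minimal numerical sketch of a Wirtinger-flow-style solver for real-valued Gaussian phase retrieval: a spectral initialization followed by gradient iterations on the quadratic loss. All dimensions, step sizes and iteration counts are illustrative choices, not the tuned constants from the underlying papers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 640                       # signal dimension, number of quadratic equations
x = rng.standard_normal(n)           # ground-truth signal
A = rng.standard_normal((m, n))      # random sensing vectors (one per row)
y = (A @ x) ** 2                     # intensity-only (phaseless) measurements

# Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
# scaled to match the average measurement energy.
Y = (A.T * y) @ A / m
eigvecs = np.linalg.eigh(Y)[1]
z = eigvecs[:, -1] * np.sqrt(y.mean())

# Gradient iterations on f(z) = (1/4m) sum_i ((a_i^T z)^2 - y_i)^2.
mu = 0.2 / y.mean()                  # heuristic step size, roughly mu / ||x||^2
for _ in range(500):
    Az = A @ z
    z -= mu * (A.T @ ((Az ** 2 - y) * Az) / m)

# The global sign is unrecoverable from intensities, so compare up to sign.
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error (up to global sign): {err:.2e}")
```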

Bio: Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, and professor of electrical engineering (by courtesy) at Stanford University. Up until 2009, he was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. His research interests are in applied mathematics, statistics, information theory, signal processing and mathematical optimization, with applications to the imaging sciences, scientific computing and inverse problems. Candès graduated from the Ecole Polytechnique in 1993 with a degree in science and engineering, and received his Ph.D. in statistics from Stanford University in 1998. He has given over 60 plenary lectures at major international conferences, including the International Congress of Mathematicians (2014). He received the 2006 Alan T. Waterman Award from the NSF, which recognizes the achievements of early-career scientists. Other honors include the 2013 Dannie Heineman Prize presented by the Academy of Sciences at Göttingen, the 2010 George Pólya Prize awarded by the Society for Industrial and Applied Mathematics (SIAM), and the 2015 AMS-SIAM George David Birkhoff Prize in Applied Mathematics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.

Plenary talk 1 – Monday July 6, 09:10
Gabriel Peyré, CNRS and Université Paris-Dauphine, France
Exact Support Recovery for Sparse Spikes Deconvolution

In this talk, I study sparse spikes deconvolution over the space of measures, following several recent works (see for instance [2,3]). For non-degenerate sums of Diracs, we show that, when the signal-to-noise ratio is large enough, total variation regularization of measures (which is the natural extension of the l1 norm of vectors to the setting of measures) recovers exactly the same number of Diracs. We also show that both the locations and the heights of these Diracs converge toward those of the input measure when the noise drops to zero. The exact speed of convergence is governed by a specific dual certificate, which can be computed by solving a linear system. These results extend those obtained in [2]. We also draw connections between the performance of sparse recovery on a continuous domain and on a discretized grid. When the measure is positive, it is known that l1-type methods always succeed when there is no noise. We show that exact support recovery is still possible when there is noise. The signal-to-noise ratio should then scale like $1/t^{2N-1}$ when there are N spikes separated by a distance t. This reflects the intrinsic explosion of the ill-posedness of the problem [4]. This is joint work with Vincent Duval and Quentin Denoyelle; see [1,4] for more details.

[1] V. Duval and G. Peyré. Exact support recovery for sparse spikes deconvolution. To appear in Foundations of Computational Mathematics, 2015.
[2] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics, 67(6), 906-956, 2013.
[3] K. Bredies and H.K. Pikkarainen. Inverse problems in spaces of measures. ESAIM: Control, Optimisation and Calculus of Variations, 19:190-218, 2013.
[4] Q. Denoyelle, V. Duval and G. Peyré. Asymptotic of sparse recovery for positive measures. Preprint, HAL, 2015.
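
To make the grid-versus-continuum connection mentioned in the abstract concrete, the following sketch solves the discretized l1 analogue of the measure-space problem: spikes are recovered from a noisy Gaussian-blurred observation by ISTA. The kernel width, grid size, regularization weight and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                      # grid size
t = np.linspace(0, 1, n)
x = np.zeros(n)
x[[50, 90, 140]] = [1.0, -0.7, 0.5]          # three spikes (signed heights)

# Gaussian point-spread function as a dense convolution operator.
sigma = 0.03
Phi = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma ** 2))
Phi /= np.linalg.norm(Phi, 2)                # normalize the spectral norm to 1
y = Phi @ x + 0.01 * rng.standard_normal(n)  # noisy low-pass observation

# ISTA for min_u 0.5 * ||Phi u - y||^2 + lam * ||u||_1
# (a unit step is valid because ||Phi||_2 = 1).
lam = 0.005
u = np.zeros(n)
for _ in range(2000):
    u = u - Phi.T @ (Phi @ u - y)
    u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold

# On a grid, the recovered support typically clusters around the true spike
# locations {50, 90, 140}; the continuous theory makes this precise.
print("recovered support (grid indices):", np.nonzero(np.abs(u) > 1e-3)[0])
```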

Bio: Gabriel Peyré graduated from Ecole Normale Supérieure de Cachan, France, in 2003 and received his Ph.D. in applied mathematics from Ecole Polytechnique, Paris, France, in 2005. Since 2006, he has been a researcher at the Centre National de la Recherche Scientifique (CNRS), working in Ceremade, University Paris-Dauphine. He is head of the research group SIGMA-Vision, which is funded by the European Research Council (ERC). SIGMA-Vision's activity is focused on sparse and adaptive representations with applications in computer vision, computer graphics and neurosciences. Since 2005 Gabriel Peyré has co-authored 40 papers in international journals, 50 conference proceedings in top vision and image processing conferences, and two books. He is the creator of the "Numerical Tours of Signal Processing" (www.numerical-tours.com), a popular online repository of Matlab/Scilab resources for teaching modern signal and image processing.

Plenary talk 2 – Monday July 6, 13:30
Zoubin Ghahramani, University of Cambridge, UK.
Sparsity? A Bayesian view

I will first review some motivations for sparsity in the context of probabilistic modelling. I will then describe how some recent ideas from Bayesian nonparametrics can be used for flexible modelling with sparsity assumptions. Finally, I will challenge some strongly held views on sparsity, L1, and optimisation methods, showing experimental results to support my claims.

This talk is based on work with Shakir Mohamed, Katherine Heller, and David Knowles.
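
As a hint of the Bayesian view of sparsity the talk develops, the sketch below samples from a spike-and-slab prior, whose draws contain exact zeros, and contrasts it with a Laplace prior, the probabilistic reading of the l1 penalty, whose draws do not. The parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, pi, slab_std = 20, 0.2, 1.0

# Spike-and-slab: each coefficient is exactly zero with probability 1 - pi,
# and drawn from a Gaussian "slab" otherwise.
z = rng.random(d) < pi                        # binary inclusion indicators
w = np.where(z, rng.normal(0.0, slab_std, d), 0.0)
print("spike-and-slab nonzeros:", np.count_nonzero(w), "of", d)

# A Laplace prior puts no mass on exact zeros, so its samples are never
# sparse in the strict sense -- one root of the debate the talk takes up.
w_l1 = rng.laplace(0.0, 1.0, d)
print("Laplace nonzeros:       ", np.count_nonzero(w_l1), "of", d)
```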

Bio: Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads a group of about 30 researchers. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research focuses on nonparametric Bayesian modelling and statistical machine learning. He has also worked on applications to bioinformatics, econometrics, and a variety of large-scale data modelling problems. He has published over 200 papers, receiving 25,000 citations (an h-index of 68). His work has been funded by grants and donations from EPSRC, DARPA, Microsoft, Google, Infosys, Facebook, Amazon, FX Concepts and a number of other industrial partners. In 2013, he received a $750,000 Google Award for research on building the Automatic Statistician. He serves on the advisory boards of Opera Solutions and Microsoft Research Cambridge, on the Steering Committee of the Cambridge Big Data Initiative, and in a number of leadership roles as programme and general chair of the leading international conferences in machine learning: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). More information can be found at http://mlg.eng.cam.ac.uk .

Plenary talk 3 – Tuesday July 7, 09:00
Roman Vershynin, University of Michigan, USA
Recovering the hidden structure of sparse networks

Most big real-world networks (social, technological, biological) are sparse. Most such networks have noticeable structure, which can be formed by clusters (communities) and hubs. When and how can a hidden structure be recovered from a sparse network? Known approaches to this problem come from a variety of disciplines: probability, combinatorics, physics, statistics, optimization, information theory, etc. We will focus on recently developed probabilistic approaches motivated by sparse recovery, where a network is regarded as a random measurement of the hidden structure.
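
The "network as a random measurement of hidden structure" viewpoint can be illustrated with the simplest such model, the stochastic block model: the sketch below samples a sparse two-community graph and estimates the partition from the second eigenvector of the adjacency matrix. Sizes and edge probabilities are illustrative, and this plain spectral method is only a stand-in for the more refined approaches the talk surveys.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400                                   # nodes, two planted communities of n/2
labels = np.repeat([0, 1], n // 2)
p_in, p_out = 0.10, 0.02                  # sparse within/between edge probabilities

# Sample a symmetric adjacency matrix with no self-loops.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = np.triu((rng.random((n, n)) < P).astype(float), 1)
A = A + A.T

# Spectral estimate: the sign pattern of the eigenvector for the second
# largest eigenvalue of A correlates with the planted partition.
v = np.linalg.eigh(A)[1][:, -2]
guess = (v > 0).astype(int)

acc = max(np.mean(guess == labels), np.mean(guess != labels))  # up to relabeling
print(f"fraction of nodes classified correctly: {acc:.3f}")
```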

Bio: Roman Vershynin is a Professor of Mathematics working at the University of Michigan. His primary area of expertise is high dimensional probability. He is interested in random geometric structures that appear across mathematics and data science, in particular random matrix theory, geometric functional analysis, convex and discrete geometry, geometric combinatorics, high dimensional statistics, information theory, learning theory, signal processing, numerical analysis, and network science.
Roman Vershynin received the equivalent of an M.S. from Kharkiv National University in Ukraine in 1996, and a Ph.D. from the University of Missouri-Columbia in 2000. Prior to his appointment at the University of Michigan, he was a faculty member at the University of California, Davis (2003-2008) and a postdoctoral fellow at the University of Alberta in Canada (2001-2003) and the Weizmann Institute of Science in Israel (2000-2001). He has served on the Editorial Board of the Journal of Fourier Analysis and Applications. He received an Alfred P. Sloan Research Fellowship in 2005 and a Bessel Research Award from the Humboldt Foundation in 2013, and was an invited speaker at the International Congress of Mathematicians in 2010 and a plenary speaker at the Summer Meeting of the Canadian Mathematical Society in 2011.

Plenary talk 4 – Tuesday July 7, 13:30
Maryam Fazel, University of Washington, USA
Recovery and denoising with simultaneous structures

Recovering structured objects from linear measurements has been well-studied in recent years. In many applications in signal processing and machine learning, the object of interest has several structures simultaneously, e.g., a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., l1 norm for sparsity, nuclear norm for matrix rank), and it is common in practice to minimize a combination of such norms.

In this talk, we examine the problems of recovery and denoising when simultaneous structures are present. First, we show that multi-objective optimization with these norms can do no better, order-wise, than exploiting only one of the present structures. Thus, to fully exploit the multiple structures, we need an entirely new convex relaxation, not one that combines the convex relaxations for each structure. We then specialize this to the case of sparse and low-rank matrices, and show that a nonconvex formulation can recover the model from very few measurements (on the order of the degrees of freedom of the matrix), whereas the convex problem obtained by combining the l1 and nuclear norms requires many more measurements. Our framework applies to arbitrary norms as well as to a wide range of measurement ensembles beyond Gaussian. This result allows us to give sample-complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.

This talk is based on joint work with Samet Oymak, Amin Jalali, Yonina Eldar, and Babak Hassibi.
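
For concreteness, here is a small instance of the kind of combined convex program the talk analyzes, written with cvxpy (assumed available): recovering a simultaneously sparse and rank-one matrix by minimizing a weighted sum of the l1 and nuclear norms subject to Gaussian measurements. The sizes, weight tau and measurement count are illustrative; the talk's result is precisely that no such weighting achieves the order-optimal sample complexity.

```python
import numpy as np
import cvxpy as cp   # assumed available; any SDP-capable solver will do

rng = np.random.default_rng(4)
n, k = 12, 4                                # matrix size; sparsity of each factor
u = np.zeros(n); u[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
v = np.zeros(n); v[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
X0 = np.outer(u, v)                         # simultaneously sparse and rank-one

m = 80                                      # number of Gaussian measurements
Ms = rng.standard_normal((m, n, n))
b = np.array([np.sum(Ms[i] * X0) for i in range(m)])

# Combined convex relaxation: weighted sum of the l1 and nuclear norms.
X = cp.Variable((n, n))
tau = 1.0                                   # illustrative relative weight
objective = cp.Minimize(cp.sum(cp.abs(X)) + tau * cp.normNuc(X))
constraints = [cp.sum(cp.multiply(Ms[i], X)) == b[i] for i in range(m)]
cp.Problem(objective, constraints).solve()

# Whether this succeeds at a given m is exactly the sample-complexity question
# the talk answers: no tau does order-wise better than the best single norm.
err = np.linalg.norm(X.value - X0) / np.linalg.norm(X0)
print(f"relative recovery error: {err:.2e}")
```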

Bio: Maryam Fazel is an Associate Professor of Electrical Engineering at the University of Washington, with adjunct appointments in Computer Science and Engineering, Mathematics, and Statistics. Maryam received her MS and PhD in EE from Stanford University, her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech prior to joining UW. Her research interests are broad, centered at mathematical optimization, including convex optimization theory and algorithms, low-rank matrix methods, and applications in machine learning, signal processing, and control systems. She is a recipient of the NSF CAREER Award (2009), the UW EE Outstanding Teaching Award (2009), and the UAI (Uncertainty in Artificial Intelligence) conference Best Student Paper Award (2014), and coauthored a paper selected as a Fast-Breaking paper by Science Watch (2011) on the problem of low-rank matrix recovery.

Plenary talk 5 – Wednesday July 8, 09:00
Michael Lustig, UC Berkeley, USA
Sparse MRI since SPARS '05

MRI is an excellent imaging modality. However, it suffers from slow data acquisition rates. Since its invention more than 30 years ago, improvements in hardware and imaging techniques have enabled faster data collection. However, we are currently at the point where fundamental physical and physiological effects limit our ability to simply encode data more quickly. This fundamental limit has led many researchers to look for methods to reduce the amount of acquired data without degrading the image quality. Ten years ago at SPARS '05 we first presented initial results using compressed sensing in MRI. Since then, there has been significant effort by many groups to exploit sparsity and other models for reducing the acquisition time as well as resolving dynamics.

The purpose of the talk is to report on what we have been able to achieve using compressed sensing in clinical practice since then, on the current status of the technology, and on where it is going. In particular, we will focus on the benefits for paediatric patients. This work has been a collaboration between the UC Berkeley EECS department, the Stanford University EE, Statistics and Radiology departments, Lucile Packard Children's Hospital, and GE Healthcare.
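
A toy version of the compressed-sensing MRI recipe looks as follows: sample a random subset of Fourier (k-space) coefficients of an object that is sparse in some transform, then reconstruct by l1-regularized inversion. This sketch uses a 1D object that is sparse in the identity basis and plain ISTA; clinical reconstructions rely on wavelets, total variation, parallel imaging and far richer models.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 256
x = np.zeros(n)
x[rng.choice(n, 10, replace=False)] = rng.random(10) + 0.5   # sparse "object"

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 4, replace=False)] = True            # 4x undersampled k-space
y = np.fft.fft(x, norm="ortho")[mask]                        # measured coefficients

# ISTA for min_u 0.5 * ||F_mask u - y||^2 + lam * ||u||_1; the normalized
# partial FFT has spectral norm <= 1, so unit steps are safe.
lam = 0.01
u = np.zeros(n, dtype=complex)
for _ in range(300):
    resid = np.zeros(n, dtype=complex)
    resid[mask] = np.fft.fft(u, norm="ortho")[mask] - y
    u = u - np.fft.ifft(resid, norm="ortho")                 # adjoint (zero-filled) step
    u = np.exp(1j * np.angle(u)) * np.maximum(np.abs(u) - lam, 0)  # complex soft threshold

print("true support recovered:", np.array_equal(np.abs(u) > 0.1, x > 0))
```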

Bio: Michael (Miki) Lustig is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. He joined the faculty in spring 2010. He received his B.Sc. in Electrical Engineering from the Technion, Israel Institute of Technology, in 2002, and his M.Sc. and Ph.D. in Electrical Engineering from Stanford University in 2004 and 2008, respectively. His research focuses on medical imaging, particularly Magnetic Resonance Imaging (MRI), and specifically on the application of compressed sensing to rapid and high-resolution MRI, MRI pulse sequence design, medical image reconstruction, inverse problems in medical imaging, and sparse signal representation.

Plenary talk 6 – Wednesday July 8, 13:30
Justin Romberg, Georgia Institute of Technology, USA
Structured recovery for imaging and image processing

In the first part of this talk, we will overview recent results on convex relaxations for solving systems of bilinear equations. We will show how certain problems that are prevalent in imaging (blind deconvolution, auto-calibration) can be recast as low-rank recovery problems, and discuss the theoretical conditions under which these problems can be practically solved. We will show how this theory applies to problems including coil calibration in MRI, blind deblurring in coded imaging, and passive acoustic imaging. We will also discuss recent results on estimating matrices that are simultaneously sparse and low rank, and their applications to coded imaging and phase retrieval.
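
The lifting behind this low-rank viewpoint is easy to verify numerically: circular convolution of two signals lying in known subspaces is a linear function of the rank-one outer product of their coefficient vectors. The sketch below checks this identity; the subspace dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, p = 64, 4, 6                   # signal length; subspace dimensions
B = rng.standard_normal((n, k))      # known subspace for the unknown filter w
C = rng.standard_normal((n, p))      # known subspace for the unknown signal x
h, m = rng.standard_normal(k), rng.standard_normal(p)

w, x = B @ h, C @ m
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))     # circular convolution

# The same measurements written as a linear function of X = h m^T:
# in the Fourier domain, yhat_l = (FB)_l X (FC)_l^T for each frequency l.
FB, FC = np.fft.fft(B, axis=0), np.fft.fft(C, axis=0)
X = np.outer(h, m).astype(complex)
yhat = np.einsum("lk,kp,lp->l", FB, X, FC)
y_lifted = np.real(np.fft.ifft(yhat))

# Recovering the rank-one X from such linear data is a low-rank recovery
# problem, e.g. via nuclear-norm minimization.
print("lifting reproduces the convolution:", np.allclose(y, y_lifted))
```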

In the second part of the talk, we will discuss a novel application of sparse recovery to a general problem in computer vision: locating geometrically regular objects in an image in which they are overlapping and visually occluding one another. We will show how the geometric process of combining shapes through unions and occlusions can be approached by building up a function out of atoms from a dictionary. This allows us to regularize standard variational techniques for image segmentation (e.g. Chan-Vese), and leads to a natural convex relaxation of both the energy functional and the combinatorial shape constraints. We will present some theoretical conditions under which this convex relaxation is provably effective.

Bio: Justin Romberg is an Associate Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Dr. Romberg received the B.S.E.E. (1997), M.S. (1999) and Ph.D. (2004) degrees from Rice University in Houston, Texas. From Fall 2003 until Fall 2006, he was a Postdoctoral Scholar in Applied and Computational Mathematics at the California Institute of Technology. He spent the Summer of 2000 as a researcher at Xerox PARC, the Fall of 2003 as a visitor at the Laboratoire Jacques-Louis Lions in Paris, and the Fall of 2004 as a Fellow at UCLA's Institute for Pure and Applied Mathematics. In the Fall of 2006, he joined the Georgia Tech ECE faculty. In 2008 he received an ONR Young Investigator Award, in 2009 he received a PECASE award and a Packard Fellowship, and in 2010 he was named a Rice University Outstanding Young Engineering Alumnus. In 2006-2007 he was a consultant for the TV show "Numb3rs", and from 2008 to 2011 he was an Associate Editor for the IEEE Transactions on Information Theory. He is currently on the editorial board of the SIAM Journal on Imaging Sciences.

Plenary talk 7 – Thursday July 9, 09:00
Venkat Chandrasekaran, Caltech, USA
Graphical Models, Latent Variables, and Sufficient Dimension Reduction

Graphical models are statistical models defined on graphs and they offer a compact representation of the conditional dependencies underlying a large number of random variables. An outstanding challenge in many application domains is to identify the graphical model structure underlying a collection of variables. However, this task is significantly complicated by the effect of latent or unobserved phenomena, which can induce complicated confounding interactions among the observed variables. In this talk we present a computationally tractable approach based on convex optimization to address this challenge in settings with Gaussian variables. We describe extensions of this work to incorporate sufficient dimension reduction methods to attempt to obtain semantic information about the latent variables. We also discuss preliminary investigations in settings with non-Gaussian variables. Finally, we demonstrate the utility of our methodology in applications.

Joint work with Armeen Taeb, Parikshit Shah, Pablo Parrilo, and Alan Willsky.
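
The structural fact behind the convex approach can be seen in a few lines: marginalizing a handful of latent variables out of a sparse Gaussian graphical model turns the observed precision matrix into a sparse matrix minus a low-rank (Schur complement) correction, which is exactly the decomposition the estimator searches for. The dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_obs, n_lat = 20, 2
d = n_obs + n_lat

# A sparse, diagonally dominant (hence positive definite) precision matrix
# over observed + latent variables.
K = np.zeros((d, d))
for _ in range(30):                           # sparse observed-observed edges
    i, j = rng.choice(n_obs, 2, replace=False)
    K[i, j] = K[j, i] = 0.15 * rng.standard_normal()
for a in range(n_obs, d):                     # each latent touches a few observed
    for i in rng.choice(n_obs, 6, replace=False):
        K[i, a] = K[a, i] = 0.15 * rng.standard_normal()
K += np.eye(d) * (np.abs(K).sum(axis=1).max() + 0.5)

Koo, Koh, Khh = K[:n_obs, :n_obs], K[:n_obs, n_obs:], K[n_obs:, n_obs:]

# Precision of the observed marginal is the Schur complement:
# K_marginal = Koo - Koh Khh^{-1} Kho = (sparse) - (rank <= n_lat).
L = Koh @ np.linalg.solve(Khh, Koh.T)
K_marginal = Koo - L

# Marginalization creates fill-in among observed variables that share a
# latent neighbor: the confounding the talk's estimator disentangles.
print("rank of low-rank correction:", np.linalg.matrix_rank(L), "(n_lat =", n_lat, ")")
print("nonzeros: Koo =", np.count_nonzero(Koo),
      " marginal =", np.count_nonzero(np.round(K_marginal, 8)))
```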

Bio: Venkat Chandrasekaran is an Assistant Professor at Caltech in Computing and Mathematical Sciences and in Electrical Engineering. He received a Ph.D. in Electrical Engineering and Computer Science in June 2011 from MIT, and he received a B.A. in Mathematics as well as a B.S. in Electrical and Computer Engineering in May 2005 from Rice University. He was awarded the Jin-Au Kong Dissertation Prize for the best doctoral thesis in Electrical Engineering at MIT (2012), the Young Researcher Prize in Continuous Optimization at the Fourth International Conference on Continuous Optimization of the Mathematical Optimization Society (2013, awarded once every three years), an Okawa Research Grant in Information and Telecommunications (2013), and an NSF CAREER award (2014). His research interests lie in mathematical optimization and its application to the information sciences.

Plenary talk 8 – Thursday July 9, 13:50
Martin Wainwright, UC Berkeley, USA
Randomized algorithms for high-dimensional optimization: Statistical and computational guarantees

Sketches and other randomized projection schemes are generic tools for dimensionality reduction. In this talk, we discuss some recent developments in the use of sketching for obtaining fast but approximate solutions to various types of high-dimensional convex programs, ranging from relatively simple least-squares problems to more complex semidefinite programs. In the context of least-squares programs, we begin by proving that the standard version of least-squares sketching is highly suboptimal for solution approximation. Motivated by this deficiency, we propose a novel scheme known as the iterative Hessian sketch, and provide sharp bounds on its solution error. This scheme also has a generalization to arbitrary twice-differentiable objectives, leading to a randomized approximation known as the Newton sketch.

Based on joint work with Mert Pilanci and Yun Yang, UC Berkeley
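
The contrast drawn in the abstract between one-shot sketching and the iterative Hessian sketch can be reproduced on a toy least-squares problem: the classical sketch compresses the data once and solves, while IHS keeps exact gradients and re-sketches only the Hessian at each step. The sizes, sketch dimension and iteration count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N, d, m = 2000, 50, 400              # tall least-squares problem; sketch size m
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]        # exact solution, for reference

def sketch():
    # Gaussian sketch with E[S^T S] = I; fast sketches (e.g. SRHT) also work.
    return rng.standard_normal((m, N)) / np.sqrt(m)

# Classical one-shot sketch: compress the data once, then solve.
S = sketch()
x_classic = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# Iterative Hessian sketch: exact gradients, freshly sketched Hessian per step.
x = np.zeros(d)
for _ in range(5):
    SA = sketch() @ A
    g = A.T @ (b - A @ x)                          # exact gradient of 0.5||Ax - b||^2
    x = x + np.linalg.solve(SA.T @ SA, g)          # approximate Newton step

print("classical sketch distance to x_ls:", np.linalg.norm(x_classic - x_ls))
print("IHS distance to x_ls:             ", np.linalg.norm(x - x_ls))
```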

Bio: Martin Wainwright is currently a professor at the University of California at Berkeley, with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences. His research interests include high-dimensional statistics, machine learning, information theory and optimization theory. He has been awarded an Alfred P. Sloan Foundation Fellowship (2005); Best Paper Awards from the IEEE Signal Processing Society (2008), the IEEE Communications Society (2010), and, jointly, the Information Theory and Communications Societies (2012); a Medallion Lectureship (2013) from the Institute of Mathematical Statistics; a Section Lecture (2014) at the International Congress of Mathematicians; and the COPSS Presidents' Award (2014).

Topic attachments

Plenary1_Peyre.pdf (14126.7 K, uploaded 01 Sep 2015)
Plenary2_Zoubin_talk.pdf (926.6 K, uploaded 01 Sep 2015)
Plenary3_Vershynin_2015-Cambridge-SPARS.pdf (17507.7 K, uploaded 01 Sep 2015)
Plenary4_SPARS2015_Fazel.pdf (302.6 K, uploaded 01 Sep 2015)
Plenary6_Romberg_spars-talk.pdf (12217.2 K, uploaded 02 Sep 2015)
Plenary8_Wainwright_SPARS15.pdf (497.4 K, uploaded 01 Sep 2015)