SPARS 2015 – Special Lecture and Plenary Talks

We are delighted that the following highly respected researchers have agreed to give the Special Lecture and Plenary Talks at SPARS 2015.

Special Lecture – Tuesday July 7, 16:00
Bio: Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, and professor of electrical engineering (by courtesy) at Stanford University. Up until 2009, he was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. His research interests are in applied mathematics, statistics, information theory, signal processing and mathematical optimization, with applications to the imaging sciences, scientific computing and inverse problems. Candès graduated from the Ecole Polytechnique in 1993 with a degree in science and engineering, and received his Ph.D. in statistics from Stanford University in 1998. He has given over 60 plenary lectures at major international conferences, including the International Congress of Mathematicians (2014). He received the 2006 Alan T. Waterman Award from the NSF, which recognizes the achievements of early-career scientists. Other honors include the 2013 Dannie Heineman Prize presented by the Academy of Sciences at Göttingen, the 2010 George Pólya Prize awarded by the Society for Industrial and Applied Mathematics (SIAM), and the 2015 AMS-SIAM George David Birkhoff Prize in Applied Mathematics. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences.
Plenary talk 1 – Monday July 6, 09:10  
In this talk, I study sparse spikes deconvolution over the space of measures, following several recent works (see for instance [2,3]). For non-degenerate sums of Diracs, we show that, when the signal-to-noise ratio is large enough, total variation regularization of measures (which is the natural extension of the l1 norm of vectors to the setting of measures) recovers the exact same number of Diracs. We also show that both the locations and the heights of these Diracs converge toward those of the input measure when the noise drops to zero. The exact speed of convergence is governed by a specific dual certificate, which can be computed by solving a linear system. These results extend those obtained by [2]. We also draw connections between the performance of sparse recovery on a continuous domain and on a discretized grid. When the measure is positive, it is known that l1-type methods always succeed when there is no noise. We show that exact support recovery is still possible when there is noise. The signal-to-noise ratio should then scale like $1/t^{2N-1}$ when there are N spikes separated by a distance t. This reflects the intrinsic explosion of the ill-posedness of the problem [4]. This is joint work with Vincent Duval and Quentin Denoyelle; see [1,4] for more details.
[1] V. Duval, G. Peyré, Exact Support Recovery for Sparse Spikes Deconvolution, to appear in Foundations of Computational Mathematics, 2015.
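To make the grid-based side of this story concrete, here is a minimal sketch of sparse spike deconvolution on a discretized domain, solved by l1-regularized least squares via ISTA. The Gaussian point-spread function, grid size, noise level and regularization weight are illustrative assumptions, not choices from the talk or the cited papers.

```python
import numpy as np

# Discretized sparse spike deconvolution: recover a sparse vector x from
# y = Phi x + noise by l1-regularized least squares, a grid analogue of
# the continuous total-variation-of-measures problem. All sizes and the
# Gaussian point-spread function below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
Phi = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)  # convolution kernel

x_true = np.zeros(n)
x_true[[50, 90, 140]] = [1.0, -0.7, 0.5]   # N = 3 well-separated spikes
y = Phi @ x_true + 0.01 * rng.standard_normal(n)

# ISTA: proximal gradient descent on 0.5*||Phi x - y||^2 + lam*||x||_1
lam = 0.1
L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    g = Phi.T @ (Phi @ x - y)              # gradient of the quadratic term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("approximate recovered support:", np.nonzero(np.abs(x) > 0.05)[0])
```

As the abstract notes, the interesting questions concern how closely the recovered spike locations and heights track those of the underlying measure as the noise level and the grid spacing vary.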
Bio: Gabriel Peyré graduated from the École Normale Supérieure de Cachan, France, in 2003 and received his Ph.D. in applied mathematics from the École Polytechnique, Paris, France, in 2005. Since 2006, he has been a researcher at the Centre National de la Recherche Scientifique (CNRS), working in Ceremade, University Paris-Dauphine. He is head of the research group SIGMA-Vision, which is funded by the European Research Council (ERC). SIGMA-Vision's activity is focused on sparse and adaptive representations with applications in computer vision, computer graphics and neurosciences. Since 2005, Gabriel Peyré has co-authored 40 papers in international journals, 50 papers in the proceedings of top vision and image processing conferences, and two books. He is the creator of the "Numerical Tours of Signal Processing" (www.numerical-tours.com), a popular online repository of Matlab/Scilab resources for teaching modern signal and image processing.
Plenary talk 2 – Monday July 6, 13:30  
I will first review some motivations for sparsity in the context of probabilistic modelling. I will then describe how some recent ideas from Bayesian nonparametrics can be used for flexible modelling with sparsity assumptions. Finally, I will challenge some strongly held views on sparsity, L1, and optimisation methods, showing experimental results to support my claims.
This talk is based on work with Shakir Mohamed, Katherine Heller, and David Knowles.  
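The abstract does not commit to a particular model, but one standard device for encoding sparsity in a probabilistic model, in contrast to the l1 penalty the talk questions, is a spike-and-slab prior. The sketch below is purely illustrative and is not taken from the talk: it contrasts draws from a spike-and-slab prior, which contain exact zeros, with draws from a Laplace prior (the Bayesian analogue of l1), which are almost surely all nonzero.

```python
import numpy as np

# Two priors used to encode sparsity in probabilistic models:
# a spike-and-slab prior (exact zeros with probability 1 - pi) versus a
# Laplace prior (the Bayesian counterpart of the l1 penalty, whose draws
# are almost surely all nonzero). Mixing weight and scales are
# illustrative assumptions.

rng = np.random.default_rng(1)
d, pi = 10000, 0.1                       # dimension, inclusion probability

# Spike-and-slab: z_i ~ Bernoulli(pi), w_i | z_i ~ z_i * N(0, 1)
z = rng.random(d) < pi
w_ss = np.where(z, rng.standard_normal(d), 0.0)

# Laplace(0, 1): density proportional to exp(-|w|)
w_lap = rng.laplace(0.0, 1.0, size=d)

print("fraction of exact zeros, spike-and-slab:", np.mean(w_ss == 0.0))  # ~0.9
print("fraction of exact zeros, Laplace:       ", np.mean(w_lap == 0.0))  # 0.0
```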
Bio: Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, where he leads a group of about 30 researchers. He studied computer science and cognitive science at the University of Pennsylvania, obtained his PhD from MIT in 1995, and was a postdoctoral fellow at the University of Toronto. His academic career includes concurrent appointments as one of the founding members of the Gatsby Computational Neuroscience Unit in London, and as a faculty member of CMU's Machine Learning Department for over 10 years. His current research focuses on nonparametric Bayesian modelling and statistical machine learning. He has also worked on applications to bioinformatics, econometrics, and a variety of large-scale data modelling problems. He has published over 200 papers, receiving 25,000 citations (an h-index of 68). His work has been funded by grants and donations from EPSRC, DARPA, Microsoft, Google, Infosys, Facebook, Amazon, FX Concepts and a number of other industrial partners. In 2013, he received a $750,000 Google Award for research on building the Automatic Statistician. He serves on the advisory boards of Opera Solutions and Microsoft Research Cambridge, on the Steering Committee of the Cambridge Big Data Initiative, and in a number of leadership roles as programme and general chair of the leading international conferences in machine learning: AISTATS (2005), ICML (2007, 2011), and NIPS (2013, 2014). More information can be found at http://mlg.eng.cam.ac.uk .
Plenary talk 3 – Tuesday July 7, 09:00  
Most big real-world networks (social, technological, biological) are sparse. Most networks have noticeable structure, which can be formed by clusters (communities) and hubs. When and how can a hidden structure be recovered from a sparse network? Known approaches to this problem come from a variety of disciplines: probability, combinatorics, physics, statistics, optimization, information theory, etc. We will focus on the recently developed probabilistic approaches motivated by sparse recovery, where a network is regarded as a random measurement of the hidden structure.
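As a toy instance of the "network as a random measurement of hidden structure" viewpoint, the sketch below generates a sparse two-community stochastic block model and recovers the communities spectrally. The model, the parameters, and the simple adjacency-eigenvector estimator are illustrative assumptions, not methods attributed to the talk.

```python
import numpy as np

# Recovering hidden structure from a sparse random network: a
# two-community stochastic block model, with communities estimated from
# the eigenvector of the second-largest adjacency eigenvalue.

rng = np.random.default_rng(2)
n = 2000
labels = np.repeat([0, 1], n // 2)            # hidden communities
a, b = 15.0, 3.0                              # in/out connection rates a/n, b/n
P = np.where(labels[:, None] == labels[None, :], a / n, b / n)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

vals, vecs = np.linalg.eigh(A)                # eigenvalues in ascending order
v2 = vecs[:, -2]                              # second-largest eigenvector
guess = (v2 > 0).astype(int)

# Account for the global sign/label ambiguity of the eigenvector
overlap = max(np.mean(guess == labels), np.mean(guess != labels))
print(f"fraction correctly classified: {overlap:.2f}")
```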
Bio: Roman Vershynin is a Professor of Mathematics at the University of Michigan. His primary area of expertise is high-dimensional probability. He is interested in random geometric structures that appear across mathematics and data science, in particular random matrix theory, geometric functional analysis, convex and discrete geometry, geometric combinatorics, high-dimensional statistics, information theory, learning theory, signal processing, numerical analysis, and network science.
Plenary talk 4 – Tuesday July 7, 13:30  
Recovering structured objects from linear measurements has been well-studied in recent years. In many applications in signal processing and machine learning, the object of interest has several structures simultaneously, e.g., a matrix that is simultaneously sparse and low-rank. Often norms that promote each individual structure are known, and allow for recovery using an order-wise optimal number of measurements (e.g., the l1 norm for sparsity, the nuclear norm for matrix rank), and it is common in practice to minimize a combination of such norms.
In this talk, we examine the problems of recovery and denoising when simultaneous structures are present. First, we show that multi-objective optimization with these norms can do no better, order-wise, than exploiting only one of the present structures. Thus, to fully exploit the multiple structures, we need an entirely new convex relaxation, not one that combines the convex relaxations for each structure. We then specialize this to the case of sparse and low-rank matrices, and show that a nonconvex formulation can recover the model from very few measurements (on the order of the degrees of freedom of the matrix), whereas the convex problem obtained by combining the l1 and nuclear norms requires many more measurements. Our framework applies to arbitrary norms as well as to a wide range of measurement ensembles beyond Gaussian. This result allows us to give sample-complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.
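For concreteness, here is a minimal sketch of the combined convex relaxation the abstract refers to: minimizing a weighted sum of the elementwise l1 norm and the nuclear norm subject to Gaussian measurements. It uses CVXPY, and the sizes, the weight tau, and the rank-1 sparse test matrix are illustrative assumptions; the talk's point is that this combination cannot beat the better of the two norms order-wise, not that it never succeeds.

```python
import numpy as np
import cvxpy as cp

# Combined relaxation for a simultaneously sparse and low-rank matrix:
# minimize ||X||_1 + tau * ||X||_* subject to linear Gaussian measurements.

rng = np.random.default_rng(3)
n, k, m = 20, 3, 150                      # matrix size, sparsity, measurements

u = np.zeros(n)
u[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
X0 = np.outer(u, u)                       # rank-1 and k^2-sparse ground truth

G = rng.standard_normal((m, n, n)) / np.sqrt(m)   # Gaussian measurement matrices
y = np.einsum('kij,ij->k', G, X0)                 # y_k = <G_k, X0>

X = cp.Variable((n, n))
tau = 1.0                                 # relative weight of the two norms
objective = cp.Minimize(cp.sum(cp.abs(X)) + tau * cp.normNuc(X))
constraints = [cp.sum(cp.multiply(G[i], X)) == y[i] for i in range(m)]
cp.Problem(objective, constraints).solve()

print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```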
Bio: Michael (Miki) Lustig is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. He joined the faculty in spring 2010. He received his B.Sc. in Electrical Engineering from the Technion, Israel Institute of Technology, in 2002. He received his M.Sc. and Ph.D. in Electrical Engineering from Stanford University in 2004 and 2008, respectively. His research focuses on medical imaging, particularly Magnetic Resonance Imaging (MRI), and very specifically the application of compressed sensing to rapid and high-resolution MRI, MRI pulse sequence design, medical image reconstruction, inverse problems in medical imaging and sparse signal representation.
Plenary talk 6 – Wednesday July 8, 13:30  
In the first part of this talk, we will overview recent results on convex relaxations for solving systems of bilinear equations. We will show how certain problems that are prevalent in imaging (blind deconvolution, auto-calibration) can be recast as low-rank recovery problems, and discuss the theoretical conditions under which these problems can be practically solved. We will show how this theory applies to problems including coil calibration in MRI, blind deblurring in coded imaging, and passive acoustic imaging. We will also discuss recent results on estimating matrices that are simultaneously sparse and low-rank, and their applications to coded imaging and phase retrieval.
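The key algebraic step behind such recasts is "lifting": a bilinear map of the unknowns (h, m) is a linear map of the rank-1 matrix h m^T. The sketch below verifies this identity numerically for circular blind deconvolution with known subspaces; the subspace model and all dimensions are illustrative assumptions, and no recovery algorithm is run.

```python
import numpy as np

# Lifting a bilinear problem to low-rank recovery: circular blind
# deconvolution y = (B h) * (C m) is linear in the rank-1 matrix
# X = h m^T, so recovering (h, m) becomes recovering a rank-1 X from
# linear measurements y[l] = <A_l, X>.

rng = np.random.default_rng(4)
L, K, N = 64, 8, 8                       # signal length, subspace dimensions
B = rng.standard_normal((L, K))          # known subspace of the filter
C = rng.standard_normal((L, N))          # known subspace of the signal
h = rng.standard_normal(K)
m = rng.standard_normal(N)

w, x = B @ h, C @ m
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))  # circular convolution

# The same y, written as linear measurements of the lifted matrix X = h m^T,
# with A_l[p, q] = sum_n B[n, p] * C[(l - n) mod L, q]
X = np.outer(h, m)
y_lift = np.array([np.sum((B.T @ C[(l - np.arange(L)) % L]) * X)
                   for l in range(L)])

print("max lifting error:", np.max(np.abs(y - y_lift)))  # ~1e-12
```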
Bio: Venkat Chandrasekaran is an Assistant Professor at Caltech in Computing and Mathematical Sciences and in Electrical Engineering. He received a Ph.D. in Electrical Engineering and Computer Science in June 2011 from MIT, and he received a B.A. in Mathematics as well as a B.S. in Electrical and Computer Engineering in May 2005 from Rice University. He was awarded the Jin-Au Kong Dissertation Prize for the best doctoral thesis in Electrical Engineering at MIT (2012), the Young Researcher Prize in Continuous Optimization at the Fourth International Conference on Continuous Optimization of the Mathematical Optimization Society (2013, awarded once every three years), an Okawa Research Grant in Information and Telecommunications (2013), and an NSF CAREER award (2014). His research interests lie in mathematical optimization and its application to the information sciences.
Plenary talk 8 – Thursday July 9, 13:50  
Sketches and other randomized projection schemes are generic tools for dimensionality reduction. In this talk, we discuss some recent developments in the use of sketching for obtaining fast but approximate solutions to various types of high-dimensional convex programs, ranging from relatively simple least-squares problems to more complex semidefinite programs. In the context of least-squares programs, we begin by proving that the standard version of least-squares sketching is highly suboptimal for solution approximation. Motivated by this deficiency, we propose a novel scheme known as the iterative Hessian sketch, and provide sharp bounds on its solution error. This scheme also has a generalization to arbitrary twice-differentiable objectives, leading to a randomized approximation known as the Newton sketch.
Based on joint work with Mert Pilanci and Yun Yang, UC Berkeley.

Bio: Martin Wainwright is currently a professor at the University of California, Berkeley, with a joint appointment between the Department of Statistics and the Department of Electrical Engineering and Computer Sciences. His research interests include high-dimensional statistics, machine learning, information theory and optimization theory. He has been awarded an Alfred P. Sloan Foundation Fellowship (2005); Best Paper Awards from the IEEE Signal Processing Society (2008), the IEEE Communications Society (2010), and jointly from the Information Theory and Communications Societies (2012); a Medallion Lectureship (2013) from the Institute of Mathematical Statistics; a Section Lecture (2014) at the International Congress of Mathematicians; and the COPSS Presidents' Award (2014) from the Joint Statistical Societies.
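The following sketch illustrates the two schemes contrasted in the abstract above: classical "sketch-and-solve" least squares versus the iterative Hessian sketch, which keeps the exact gradient but re-sketches the Hessian at each step. The Gaussian sketch and all problem sizes are illustrative assumptions, not the settings analyzed in the talk.

```python
import numpy as np

# Sketched least squares: classical sketch-and-solve versus the
# iterative Hessian sketch (IHS). Errors are measured against the exact
# least-squares solution, the quantity both schemes try to approximate.

rng = np.random.default_rng(5)
n, d, m = 10000, 50, 500                  # data size, dimension, sketch size
A = rng.standard_normal((n, d))
y = A @ rng.standard_normal(d) + rng.standard_normal(n)

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]          # exact solution

# Classical sketch: solve the compressed problem min_x ||S(Ax - y)||
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_cs = np.linalg.lstsq(S @ A, S @ y, rcond=None)[0]

# Iterative Hessian sketch: exact gradient, freshly sketched Hessian
x = np.zeros(d)
for _ in range(5):
    St = rng.standard_normal((m, n)) / np.sqrt(m)
    SA = St @ A
    g = A.T @ (A @ x - y)                 # exact gradient
    x = x - np.linalg.solve(SA.T @ SA, g) # sketched Newton step

print("classical sketch error:", np.linalg.norm(x_cs - x_ls))
print("IHS error:             ", np.linalg.norm(x - x_ls))
```

With these sizes the IHS error contracts geometrically over the five iterations, while the one-shot sketch is stuck at an error governed by the sketch size, consistent with the suboptimality result the abstract describes.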