ITW 2016

IEEE Information Theory Workshop

2016, Cambridge, UK

Plenary Talks

The plenary talks are:

Analog to Digital Compression

Yonina Eldar (Technion - Israel Institute of Technology)

In recent years there has been an explosion of work on exploiting sparsity in order to reduce sampling rates in a wide range of applications. In this talk, we consider several examples in which analog to digital compression is possible without assuming any structure on the signal being sampled. This is possible because we are not interested in direct recovery of the signal itself, but rather of some function of the signal. We begin by considering analog to digital compression when quantization is explicitly taken into account. In particular, we develop a rate-distortion theory for reconstructing a stationary continuous-time signal under a constraint on the number of bits per sample. We then analyze the optimal sampling rates required to achieve minimal distortion and show that they are generally below the Nyquist rate. Next, we consider sampling a signal when we are interested in recovering its power spectrum. Finally, we treat sampling of ultrasound signals, where the goal is to create a beamformed image from the given samples. In all cases we demonstrate that sampling at rates much lower than the Nyquist rate is possible, despite the fact that no structure is assumed on the input signal.


Yonina C. Eldar received the B.Sc. degree in Physics in 1995 and the B.Sc. degree in Electrical Engineering in 1996, both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in Electrical Engineering and Computer Science in 2002 from MIT. She is currently a Professor in the Department of Electrical Engineering at the Technion - Israel Institute of Technology, Haifa, Israel, where she holds the Edwards Chair in Engineering. She is also a Research Affiliate with the Research Laboratory of Electronics at MIT and was a Visiting Professor at Stanford University, Stanford, CA.

Dr. Eldar has received many awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award (2013), the IEEE/AESS Fred Nathanson Memorial Radar Award (2014), and the IEEE Kiyo Tomiyasu Award (2016). She was a Horev Fellow of the Leaders in Science and Technology program at the Technion and an Alon Fellow. She received the Michael Bruno Memorial Award from the Rothschild Foundation, the Weizmann Prize for Exact Sciences, the Wolf Foundation Krill Prize for Excellence in Scientific Research, the Henry Taub Prize for Excellence in Research (twice), the Hershel Rich Innovation Award (three times), the Award for Women with Distinguished Contributions, the Andre and Bella Meyer Lectureship, the Career Development Chair at the Technion, the Muriel & David Jacknow Award for Excellence in Teaching, and the Technion’s Award for Excellence in Teaching (twice). She received several best paper awards and best demo awards together with her research students and was selected as one of the 50 most influential women in Israel. She is a member of the Young Israel Academy of Science and Humanities and the Israel Committee for Higher Education, and an IEEE Fellow. She is the Editor-in-Chief of Foundations and Trends in Signal Processing, and a member of several IEEE Technical Committees. In the past, she was a Signal Processing Society Distinguished Lecturer, and served as an associate editor for several journals. She is author of the book "Sampling Theory: Beyond Bandlimited Systems" and co-author of the books "Compressed Sensing" and "Convex Optimization Methods in Signal Processing and Communications", all published by Cambridge University Press.

Machines that learn: big data or explanatory models?

Andrew Blake (The Alan Turing Institute, London)

A leading question about machines that learn is whether they will turn out to depend more on probabilistic models that explain the data, or on networks that react to data and are trained at scale. In machine vision systems, for instance, this boils down to a choice between two paradigms: analysis-by-synthesis versus empirical recognisers. Each has its strengths, and empirical recognisers especially have made great strides in performance in the last few years, through deep learning. We can speculate about how deeply the two approaches may eventually be integrated, and reflect on the progress that has already been made towards integration.


Professor Andrew Blake is Director of the Alan Turing Institute. Prior to joining the institute in 2015, Professor Blake held the position of Microsoft Distinguished Scientist and Laboratory Director of Microsoft Research Cambridge, England. He joined Microsoft in 1999 as a Senior Researcher to found the Computer Vision group. In 2008 he became a Deputy Managing Director at the lab, before assuming the directorship in 2010. Before joining Microsoft, Andrew trained in mathematics and electrical engineering in Cambridge, England, and studied for a doctorate in Artificial Intelligence in Edinburgh. He was an academic for 18 years, latterly on the faculty at Oxford University, where he was a pioneer in the development of the theory and algorithms that make it possible for computers to behave as seeing machines.

Professor Blake has published several books, including "Visual Reconstruction" with A. Zisserman (MIT Press), "Active Vision" with A. Yuille (MIT Press) and "Active Contours" with M. Isard (Springer Verlag). He has twice won the prize of the European Conference on Computer Vision, with R. Cipolla in 1992 and with M. Isard in 1996, and was awarded the IEEE David Marr Prize (jointly with K. Toyama) in 2001.

In 2006 the Royal Academy of Engineering awarded him its Silver Medal and in 2007 the Institution of Engineering and Technology presented him with the Mountbatten Medal (previously awarded to computer pioneers Maurice Wilkes and Tim Berners-Lee, amongst others). He was elected Fellow of the Royal Academy of Engineering in 1998, Fellow of the Royal Society in 2005, and Fellow of the IEEE in 2008. In 2010, Andrew was elected to the council of the Royal Society. In 2011, he and colleagues at Microsoft Research received the Royal Academy of Engineering MacRobert Award for their machine learning contribution to Microsoft Kinect human motion-capture. In 2012 Andrew was elected to the board of the EPSRC and also received an honorary degree of Doctor of Science from the University of Edinburgh. In 2013 Andrew was awarded an honorary degree of Doctor of Engineering from the University of Sheffield. In 2014, Andrew gave the prestigious Gibbs Lecture at the Joint Mathematics Meetings. Professor Andrew Blake is the recipient of the 2016 BCS Lovelace Medal, the top award in computing in the UK, awarded by BCS, The Chartered Institute for IT. The award is presented annually to individuals who, in the opinion of BCS, have made a significant contribution to the advancement of Information Systems.

You can have it all: Rapid, robust, and rigorous algorithms for bilinear problems in signal processing and communications

Thomas Strohmer (University of California, Davis)

Bilinear problems pervade many areas of science and technology, including signal processing, radar, astronomy, medical imaging, optics, and wireless communications. Despite this omnipresence, efficient and reliable algorithms for their solution are still lacking. I will discuss several recent breakthrough results in the development of powerful algorithms for a range of bilinear problems. One example concerns the notorious blind deconvolution problem. Here, blind deconvolution refers to the following problem: Assume we are given a function y which arises as the convolution of two unknown functions g and h. When and how is it possible to recover g and h from the knowledge of y? I will present the first numerically efficient blind deconvolution algorithm that comes with rigorous convergence guarantees. (As an aside, I will recount how I once failed to catch a murderer--dubbed the "graveyard murderer" by the media--because I failed to solve a blind deconvolution problem.) We will also consider more general bilinear problems, such as self-calibration problems. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. Applications in array processing, imaging, wireless communications and the future Internet-of-Things will be presented.
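The difficulty of the problem as stated can be seen in a few lines. The sketch below (an illustrative example, not the algorithm from the talk) constructs y as the convolution of two randomly chosen vectors g and h, then shows the inherent scaling ambiguity: any rescaled pair (c·g, h/c) yields exactly the same observation, which is one reason blind deconvolution needs careful algorithmic treatment and guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal(8)   # unknown filter (short)
h = rng.standard_normal(16)  # unknown signal (longer)
y = np.convolve(g, h)        # the only quantity we get to observe

# Blind deconvolution: recover g and h from y alone.
# Without further structure the problem is ill-posed -- for any c != 0,
# the pair (c*g, h/c) produces exactly the same observation y.
c = 3.7
y_scaled = np.convolve(c * g, h / c)
assert np.allclose(y, y_scaled)
```

The bilinearity is visible in the assertion: y is linear in g for fixed h and linear in h for fixed g, so any scaling moved from one factor to the other leaves their convolution unchanged.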


Thomas Strohmer is Professor of Mathematics at the University of California, Davis. His research interests are in applied harmonic analysis, numerical analysis, signal and image processing, high-dimensional data analysis, and mathematics of information. He received his M.S. and Ph.D. in Mathematics in 1991 and 1994, respectively, from the University of Vienna, Austria. He spent one year as an Erwin Schrödinger Fellow at the Department of Statistics at Stanford University in 1997 before joining the University of California, Davis in 1998. His recent awards include the 2013 IEEE Signal Processing Society Best Paper Award and the 2014 SIAM Outstanding Paper Prize. Dr. Strohmer is on the editorial board of several journals. He also serves as a consultant to industry in the areas of telecommunications, bioengineering, and signal and image processing.