TUTORIALS

All tutorials will be held on Tuesday, 22 July 2014. There are three concurrent tutorials in the forenoon (AM) and four concurrent tutorials in the afternoon (PM). Each tutorial consists of two sessions with a break in between. All tutorials will be held in the JN Tata Auditorium building.

T1-AM: Semidefinite Relaxation: Theory, Algorithms, and Applications [Slides]
       Presenter: Kunal Chaudhury; Venue: JN Tata Auditorium

T2-AM: Rate-distortion Theory for Secrecy Systems [Slides]
       Presenter: Paul Cuff; Venue: Hall A

T3-AM: Energy/Speed/Reliability/Accuracy Trade-offs in Information Processing [Slides]
       Presenter: Manoj Gopalkrishnan; Venue: Hall B

T4-PM: Distributed Computations on Networks [Slides]
       Presenter: Moez Draief; Venue: JN Tata Auditorium

T5-PM: Consistency of Estimation in the "big data" Regime: Balancing between Uniform and Pointwise Convergence [Slides 1] [Slides 2]
       Presenter: Narayana Santhanam; Venue: Hall A

T6-PM: Computational Imaging and Illumination for Vision Applications [Slides 1] [Slides 2] [Slides 3] [Slides 4] [Slides 5] [Slides 6]
       Presenter: Ashok Veeraraghavan; Venue: Hall B

T7-PM: Deep Neural Networks for Speech and Language Processing (CANCELLED)
       Presenters: Bhiksha Raj and Rita Singh; Venue: Hall C
AM TUTORIALS
  Session 1: 09:30 - 11:00 am
  Break:     11:00 - 11:30 am
  Session 2: 11:30 am - 1:00 pm

PM TUTORIALS
  Session 3: 2:30 - 4:00 pm
  Break:     4:00 - 4:30 pm
  Session 4: 4:30 - 6:00 pm

 

Title: Energy/Speed/Reliability/Accuracy Trade-offs in Information Processing

Speaker:  Manoj Gopalkrishnan, TIFR Mumbai


 

Bio: Manoj Gopalkrishnan received his B.Tech. in Computer Science and Engineering from IIT Kharagpur in 2003, followed by a Ph.D. in Computer Science from the University of Southern California in 2008. During his Ph.D., he was a member of the Laboratory for Molecular Science and worked on DNA self-assembly and Reaction Network Theory. In 2009, he joined the School of Technology and Computer Science at TIFR. He is a recipient of the Ramanujan Fellowship.

Abstract: Our daily experiences with life and silicon constantly remind us that information processing requires energy. In contrast, careful theoretical consideration over the past forty years has provided only trivial physical lower bounds on the energy cost of computation and on the cost of transmitting information over a channel, accompanied by suggestions that computation can be done for arbitrarily little energy per step. We take the point of view that this state of affairs results from inadequate problem formulations that ignore essential semantic aspects of information processing. By restricting our attention to the case of a single bit, we make progress towards a better formulation of the minimum energy requirement for bit operations like switching, erasing, copying, and communicating. We give precise definitions for the notion of a bit and for these bit operations. Our formulation simultaneously takes into account requirements of speed, reliability, and accuracy. We argue that all of these operations carry a substantial energy cost. We also argue for limits on the efficiency of charging batteries.
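For orientation only (standard physics background, not part of the tutorial material): the classical lower bound usually cited in this context is Landauer's limit on the energy dissipated when erasing one bit at temperature T.

```latex
% Landauer's bound on the heat dissipated when erasing one bit at temperature T:
\[
E_{\mathrm{erase}} \;\ge\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \mathrm{J}
\quad \text{at } T = 300\ \mathrm{K}.
\]
```

This bound is independent of speed, reliability, and accuracy requirements; the abstract argues that accounting for those requirements should yield substantially larger, non-trivial bounds.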

 


Title: Rate-distortion Theory for Secrecy Systems

Speaker: Paul W. Cuff, Princeton University


 

Bio: Paul Cuff has been an Assistant Professor of Electrical Engineering at Princeton University since 2009. He currently advises four Ph.D. students working on information theory and co-advises two Ph.D. students working on recommendation and voting systems. Dr. Cuff received his B.S. degree in electrical engineering from Brigham Young University in 2004 and his M.S. and Ph.D. degrees in electrical engineering from Stanford University in 2006 and 2009. He was awarded the ISIT 2008 Student Paper Award for his work titled "Communication Requirements for Generating Correlated Random Variables" and, as a graduate student, was a recipient of the National Defense Science and Engineering Graduate Fellowship and the Numerical Technologies Fellowship.

Abstract: This tutorial covers an aspect of information theoretic security pertaining to source coding. We will generalize the celebrated "rate-distortion theory" to secrecy systems, measuring performance by distortion at the intended receiver and at the adversary. Many surprises are encountered along the way. For example, access to past source information at the adversary has a dramatic effect on the theory. This is in contrast to rate-distortion theory without security concerns, where strictly causal side information does not change the rate-distortion function. The tutorial covers a couple of proof techniques that are used in this theory and are useful elsewhere in source coding and information theoretic security. One such technique is the use of the principle of soft covering (sometimes referred to as channel resolvability), which states that a random codebook of sufficient size will induce an i.i.d. channel output when a codeword is selected at random and passed through the channel. The soft covering principle finds many uses in information theoretic security, including a simple proof of strong secrecy in the wiretap channel. Another tool is the "likelihood encoder." This encoder is non-deterministic and is not defined by jointly typical sets. When the likelihood encoder is combined with the soft-covering principle, they produce remarkably simple achievability proofs for many known source coding results, and they provide an analysis that is precise enough to yield rate-distortion theory for secrecy systems.
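As a rough, informal rendering of the soft-covering principle mentioned above (my paraphrase of the standard lemma, not taken from the tutorial slides): if a codebook of 2^{nR} codewords is drawn i.i.d. from P_X and a uniformly chosen codeword is passed through a memoryless channel P_{Y|X}, then for R > I(X;Y) the induced output distribution approaches the i.i.d. output in total variation.

```latex
% Induced output distribution of a random codebook C = {X^n(m)}, m = 1, ..., 2^{nR},
% each codeword drawn i.i.d. from P_X and passed through the memoryless channel P_{Y|X}:
\[
P_{Y^n}(y^n) \;=\; \frac{1}{2^{nR}} \sum_{m=1}^{2^{nR}} \prod_{i=1}^{n} P_{Y|X}\!\big(y_i \mid X_i(m)\big).
\]
% Soft covering: if R > I(X;Y), the expected total variation to the i.i.d. output vanishes:
\[
\mathbb{E}_{\mathcal{C}} \Big\| P_{Y^n} - \textstyle\prod_{i=1}^{n} P_Y \Big\|_{\mathrm{TV}}
\;\xrightarrow[n \to \infty]{}\; 0.
\]
```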

 


 Title: Distributed Computations on Networks

Speaker: Moez Draief, Imperial College

 


 

Bio: Moez Draief graduated from the Ecole Polytechnique (Paris) in 2000. He then completed a DEA in Probability Theory at the University Paris VI and undertook a Ph.D. at LIAFA (Theoretical Computer Science Group), University Paris VII. From October 2004 to January 2007, he was a Marie Curie research fellow at the Statistical Laboratory and a lecturer in Part III (Certificate of Advanced Study in Mathematics), Cambridge University. Since 2007, he has been a faculty member of the EEE department at Imperial College London, where he is now an Associate Professor. His main research interests are the analysis of distributed algorithms and network economics and dynamics; his work involves the mathematical analysis of models (especially from probability and algorithms) to study the large networks that arise in online systems.

Abstract: Due to the sheer size of the networks we are dealing with today, many operations that were routinely performed by a central entity have to be adapted to run in a distributed fashion, i.e. the nodes interact among themselves to perform the computation. We will illustrate the challenges and some of the recent developments towards solving this important problem. To this end, I will discuss relevant aspects of spectral graph theory and randomized distributed algorithms such as random walks and gossiping.
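To give a flavour of the randomized distributed algorithms mentioned above, here is a minimal sketch of pairwise gossip averaging on a graph. It is illustrative only and not taken from the tutorial; the graph model, parameters, and the networkx dependency are my own choices.

```python
import random
import networkx as nx

def gossip_average(graph, values, steps=50000, seed=0):
    """Pairwise gossip: repeatedly pick a random edge (i, j) and replace both
    endpoints' values by their average. On a connected graph, all local values
    converge to the global mean of the initial data."""
    rng = random.Random(seed)
    x = dict(values)                 # node -> current local estimate
    edges = list(graph.edges())
    for _ in range(steps):
        i, j = rng.choice(edges)     # a random edge "wakes up"
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

if __name__ == "__main__":
    g = nx.connected_watts_strogatz_graph(50, 4, 0.3, seed=1)
    init = {v: float(v) for v in g.nodes()}          # arbitrary initial data
    est = gossip_average(g, init)
    mean = sum(init.values()) / len(init)
    print("true mean:", mean)
    print("max deviation after gossip:", max(abs(e - mean) for e in est.values()))
```

The rate at which the deviation shrinks is governed by the spectral gap of the graph, which is where the spectral graph theory mentioned in the abstract enters.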

 


Title: Semidefinite Relaxation: Theory, Algorithms, and Applications

Speaker: Kunal Chaudhury, IISc Bangalore


Bio: Kunal Chaudhury is currently an Assistant Professor in the Department of Electrical Engineering at the Indian Institute of Science. Prior to joining IISc, he was a research associate in the Program in Applied and Computational Mathematics at Princeton University. His research interests are in inverse problems in image acquisition and processing, sensor network localization, protein structure calculation, numerical optimization, and fast/scalable algorithms for signal and image processing. Kunal has a B.E. in Electrical Engineering from Jadavpur University, an M.E. in System Science and Automation from IISc, and a Ph.D. in Computers, Communication, and Information Systems from EPFL, Switzerland.

 

Abstract: Convex relaxation of hard computational problems has emerged as an important and versatile tool in various scientific and engineering applications. The roots of convex relaxation can be traced back to the 80s and 90s in the applied mathematics and theoretical computer science communities. However, it is the dramatic advances in convex optimization made over the last few decades (both in terms of theory and the availability of efficient off-the-shelf solvers) that have made convex relaxation a success story. In particular, the idea of compressed sensing, and more generally sparse signal processing, has captured the attention of the signal processing, communications, and information theory communities over the last decade or so. This tutorial is about a form of convex relaxation called semidefinite relaxation (SDR). Broadly speaking, this involves the reduction of a difficult non-convex (possibly NP-hard) problem to a semidefinite program, which can in turn be solved in polynomial time using, for example, standard interior-point solvers. In principle, the framework of SDR is sufficiently broad to cover almost all forms of convex relaxation. The tutorial will begin with a concise overview of the theory and algorithms in SDR. We will then quickly move on to some recent scientific and engineering applications of SDR. In particular, we will consider applications of SDR to MIMO decoding, phase retrieval, sensor network localization, and distributed registration, and discuss some of the recent advances and results.
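As a concrete illustration of the lifting idea behind SDR (a sketch of my own, not material from the tutorial; cvxpy and numpy are assumed to be available, and the cost matrix is random): the Boolean quadratic program min x^T Q x over x in {-1, +1}^n is relaxed by replacing x x^T with a positive semidefinite matrix X with unit diagonal, after which a feasible vector is recovered by randomized rounding.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
Q = A + A.T                               # arbitrary symmetric cost matrix

# Lift x x^T to a PSD matrix variable X and drop the rank-one constraint.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

# Randomized rounding back to a feasible +/-1 vector (Goemans-Williamson style).
L = np.linalg.cholesky(X.value + 1e-6 * np.eye(n))
best_val, best_x = np.inf, None
for _ in range(100):
    x = np.where(L @ rng.standard_normal(n) >= 0, 1.0, -1.0)
    val = x @ Q @ x
    if val < best_val:
        best_val, best_x = val, x

print("SDR lower bound:", prob.value)
print("best rounded objective:", best_val)
```

The gap between the two printed values indicates how tight the relaxation is on this particular instance.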

 

 


 Title: Consistency of Estimation in the "big data" Regime: Balancing between Uniform and Pointwise Convergence

Speaker: Narayana P. Santhanam,  University of Hawaii at Manoa


 

Bio: Narayana Santhanam has been an Assistant Professor at the University of Hawaii since 2009. He obtained his B.Tech. from the Indian Institute of Technology, Chennai (then Madras), in 2000, and his M.S. and Ph.D. from the University of California, San Diego, in 2003 and 2006, respectively. From 2007 to 2008, he held a postdoctoral position at the University of California, Berkeley.

His research interests lie at the intersection of information theory and statistics, with a focus on the undersampled/high-dimensional regime, including applications to finance, biology, communication, and estimation theory. He is a recipient of the 2006 Best Paper Award from the IEEE Information Theory Society (jointly with A. Orlitsky and J. Zhang) and has organized several workshops on high-dimensional statistics and "big data" problems over the last five years. He is currently a member of the National Science Foundation Center for Science of Information (a Science and Technology Center) and leads the big data/data analytics cluster at the University of Hawaii College of Engineering.

Abstract: Today, the data accumulated in many biological, financial, and other statistical problems stands out not just because of its nature or size, but also because the questions we ask of it are unlike anything we have asked before. There is often a tension in these big data problems between the need for rich model classes that better represent the application and our ability to handle these classes at all from a mathematical point of view.

Consider an example of insuring the risk of exposure to the Internet as opposed to the simple credit monitoring tools available today. Given the significant number of identity thefts, security breaches, and privacy concerns, insurance of this nature may be highly desirable. How would one model loss here? After all, losses suffered can range from direct loss of property to more intangible, yet very significant damage resulting from lowered credit scores.  Designing insurance policies with ceilings on claim payments keeps us in familiar territory mathematically, but also misses the point of why one may want this sort of insurance. We therefore want a richer set of candidate loss models that do not impose artificial ceilings on loss.
 
But we will run into a fundamental roadblock here. The richness of model classes is often quantified by metrics such as the VC dimension, the Rademacher complexity, or the strong compression redundancy. Typically, one looks for estimation algorithms with model-agnostic guarantees based on the sample size; indeed, this is the uniform consistency dogma that underlies most formulations of engineering applications today. But any such guarantee for estimators over a model class depends on the complexity metrics above: the more complex the class, the worse the guarantees.
 
In fact, the insurance problem above and many applications in the "big data" regime force us to consider model classes that are too complex to admit estimators with reasonable model-agnostic guarantees (uniformly consistent estimators). Instead, the best we can often do is to obtain guarantees that depend not just on the sample size but also on the underlying model (pointwise consistency). This is not very helpful either: our gauge of how well the estimator is doing depends on the very quantity being estimated!
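To make the distinction concrete (a schematic rendering in generic notation, not taken from the tutorial): write \hat{\theta}_n for an estimator based on n samples from an unknown p in a model class \mathcal{P}, \theta(p) for the quantity being estimated, and \ell for a loss.

```latex
% Uniform consistency over the class \mathcal{P}:
\[
\sup_{p \in \mathcal{P}} \Pr_p\!\left( \ell\big(\hat{\theta}_n, \theta(p)\big) > \epsilon \right)
\xrightarrow[n \to \infty]{} 0 \qquad \text{for every } \epsilon > 0.
\]
% Pointwise consistency:
\[
\Pr_p\!\left( \ell\big(\hat{\theta}_n, \theta(p)\big) > \epsilon \right)
\xrightarrow[n \to \infty]{} 0 \qquad \text{for each fixed } p \in \mathcal{P} \text{ and every } \epsilon > 0.
\]
```

In the uniform case the sample size needed for a target accuracy can be chosen without knowing p; in the pointwise case it may depend on the very model being estimated, which is the difficulty highlighted above.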
 
We therefore challenge the dichotomy of uniform and pointwise consistency in the analysis of statistical estimators. Neither uniform nor pointwise guarantees are particularly suited to the big data problems we have in mind. The former precludes the desired richness of model classes. While the latter allows for rich model classes, it does not provide practical guarantees that can be used in applications.

Instead, we consider a new paradigm positioned between these two extremes. This framework modifies the world of pointwise consistent estimators: it keeps the richness of model classes as far as possible, while ensuring that all the information about the unknown model needed to evaluate estimator accuracy can be gleaned from the data. We call this data-driven pointwise consistency. To bring focus to the theoretical framework, we will formulate and characterize this approach for prediction and compression over countably infinite alphabets.

 


Title: Computational Imaging and Illumination for Vision Applications

Speaker: Ashok Veeraraghavan, Rice University


 

Bio: Ashok Veeraraghavan is currently an Assistant Professor of Electrical and Computer Engineering at Rice University, TX, USA. Before joining Rice University, he spent three wonderful and fun-filled years as a Research Scientist at Mitsubishi Electric Research Labs in Cambridge, MA. He received his Bachelors in Electrical Engineering from the Indian Institute of Technology, Madras in 2002 and his M.S. and Ph.D. degrees from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2004 and 2008, respectively. His thesis received the Doctoral Dissertation Award from the Department of Electrical and Computer Engineering at the University of Maryland. He loves playing, talking, and pretty much anything to do with the slow and boring but enthralling game of cricket.

At Rice University, Prof. Veeraraghavan directs the Computational Imaging and Vision Lab. His research interests are broadly in the areas of computational imaging, computer vision and robotics.

Abstract: This is an exciting time to be in computational vision. I will begin the talk with a short description of the three technological changes that are driving this excitement and my optimism:

(a) The explosive growth of cameras in mobile phones (and the impending growth of projectors in mobile phones) has resulted in billions of people having immediate personal access to cameras/projectors with associated computing platforms;

(b) The recent breakthroughs in signal processing, especially compressive sensing and dictionary learning, that allow us to do more with less (a small sketch of this idea follows the list below); and

(c) The amalgamation of ideas from vision, graphics and optics into the ‘new’ field of computational photography, leveraging physics-based models and active and programmable imaging techniques.
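The sketch below illustrates the compressive-sensing idea referenced in item (b): a sparse signal can be recovered from far fewer random measurements than its length by L1 minimization. This is a generic textbook-style example of my own, not material from the tutorial; numpy and cvxpy are assumed to be available, and all dimensions are made up.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5            # signal length, number of measurements, sparsity

# A k-sparse ground-truth signal.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

# Random Gaussian sensing matrix and compressed measurements y = A x.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit: minimize the L1 norm subject to consistency with the measurements.
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

With 60 Gaussian measurements of a length-200, 5-sparse signal, the recovered vector should typically match the ground truth up to solver tolerance, which is the "more with less" phenomenon the talk refers to.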

I will then provide a brief overview of the current research focus of our lab at Rice University, where we leverage all three of these driving forces in an attempt to improve the flexibility, functionality and resolution of imaging and vision systems.

In the second half of my talk, I will focus on the core themes of interest in my lab and our core capabilities, highlighting a few results from recent projects. Most of these projects benefit from a carefully designed interplay between recent advances in the following areas: computer vision, optics, computational photography, optimization and compressive sensing.