Monday, Jan 25, 2016

Sunday, April 10, 2016

Saturday, April 30, 2016

**Title:** Difference Sets and Structured Quadratic Samplers for High Resolution Imaging

**Speaker:** Piya Pal

Assistant Professor, Dept. of Electrical & Computer Engineering

University of Maryland, College Park

**Abstract:** High-resolution source localization is an important problem arising in applications across multiple disciplines. The goal is to identify the locations of point sources from plane waves captured at an array of sensors in the form of a space-time series. By associating a qth-order (q≥2) difference set with the physical sensor locations, I will demonstrate its role in source localization from qth-order statistics of the spatiotemporal data. In particular, difference sets will be shown to naturally appear in a certain quadratic function of the sampling operator, which plays a crucial role in determining the structure of the covariance matrix of the received signals. By exploiting the additional degrees of freedom in the difference set, it is possible to localize O(M^q) sources with only M sensors, by using new geometries for spatial samplers. The difference set also plays an important role in determining the robustness of these quadratic samplers against perturbation or calibration errors. In particular, the redundancies present in the difference set can be exploited to ensure identifiability of the source parameters in the presence of such errors. Finally, the role of 2nd-order difference sets in complex phase retrieval will be demonstrated, inspiring the design of a new class of non-uniform Fourier-based samplers that can provably recover a complex signal (up to a global phase ambiguity) from its amplitude measurements with a near-minimal number of samples. An interesting connection with the 4N-4 conjecture will also be established, which hypothesizes that 4N-4 is the minimum number of generic measurements necessary for injectivity in phase retrieval of an N-dimensional complex signal.
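
For readers unfamiliar with difference sets, the second-order case is easy to compute: the difference set of M sensor positions is the set of all pairwise position differences (the "co-array"), and sparse geometries make this set much larger than M. A minimal sketch (the geometry below is illustrative, not taken from the talk):

```python
def difference_set(positions):
    """All pairwise differences {p - q} of the sensor positions (the
    second-order difference set, a.k.a. the difference co-array)."""
    return sorted({p - q for p in positions for q in positions})

# Illustrative "nested array": a dense segment {1, 2, 3} followed by a
# sparse segment {4, 8, 12}; M = 6 physical sensors.
sensors = [1, 2, 3, 4, 8, 12]
lags = difference_set(sensors)
print(len(sensors), "physical sensors ->", len(lags), "distinct virtual lags")
print(lags)
```

Here six sensors produce the 23 contiguous lags -11, ..., 11, illustrating the O(M^2) growth the abstract exploits for q = 2.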

**Title:** Hypothesis Testing in the High Privacy Limit

**Speaker:** Lalitha Sankar

Assistant Professor, Department of Electrical, Computer, and Energy Engineering

Arizona State University

**Abstract:** Binary hypothesis testing under the
Neyman-Pearson setup is a statistical inference formulation for
distinguishing data generated by two different source classes. Privacy
restrictions may require the curator of the data or the data respondents
themselves to share data with the test only after applying a randomizing
privacy mechanism. Using mutual information as the privacy metric and the
relative entropy between the two distributions of the output
(post-randomization) source classes as the utility metric, this work
focuses on finding the optimal mechanism that maximizes the chosen utility
function while ensuring that the mutual information based leakage for both
source distributions is bounded. The resulting utility-privacy tradeoff
problem is that of finding the randomizing privacy mechanism that maximizes
the relative entropy for a given pair of constraints on the mutual
information of each source and its output. Focusing on the high privacy
regime, a Euclidean information-theoretic (E-IT) approximation to the
tradeoff problem is presented. It is shown that the solution to the E-IT
approximation is independent of the alphabet size and clarifies that a
mutual information based privacy metric exploits the source statistics and
preserves the privacy of the source symbols in inverse proportion to their
likelihood.
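
As a toy illustration of this tradeoff (not from the talk), consider a binary source pushed through a bit-flip ("randomized response") mechanism: the flip probability trades the mutual-information leakage against the relative entropy between the two output distributions.

```python
import math

def H(p):
    """Binary entropy in nats."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log(p) - (1 - p) * math.log(1 - p)

def mutual_info(px, flip):
    """Leakage I(X;Y) for a Bernoulli(px) source through a bit-flip
    mechanism: I = H(Y) - H(Y|X), and H(Y|X) = H(flip) for a symmetric flip."""
    py = px * (1 - flip) + (1 - px) * flip
    return H(py) - H(flip)

def kl(p, q):
    """Utility D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Source classes with P(X=1) = 0.2 and 0.8; pushing the flip probability
# toward 1/2 (high privacy) shrinks both leakage and achievable utility.
for flip in (0.1, 0.3, 0.45):
    p0 = 0.2 * (1 - flip) + 0.8 * flip   # output law under class 0
    p1 = 0.8 * (1 - flip) + 0.2 * flip   # output law under class 1
    leak = max(mutual_info(0.2, flip), mutual_info(0.8, flip))
    print(f"flip={flip:.2f}  leakage<={leak:.4f}  utility={kl(p0, p1):.4f}")
```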

**Title:** Remotely Powered Communications

**Speaker:** Ayfer Özgur

Assistant Professor, Electrical Engineering Department

Stanford University

**Abstract:** Advancements in radio frequency power transfer over recent decades have enabled efficient wireless power transfer over longer distances, and many future "Internet of Things" applications are expected to rely on wireless powering. Motivated by these developments, we study communication with a remotely powered transmitter. We propose an information-theoretic model where a charger can dynamically decide how much power to transfer to the transmitter based on its side information regarding the communication, while the transmitter needs to dynamically adapt its coding strategy to its instantaneous energy state, which in turn depends on the actions previously taken by the charger. We characterize the capacity as an n-letter mutual information rate under various levels of side information
available at the charger. In some special cases, motivated by
different settings of practical interest, we simplify these expressions
to single-letter form, or provide an algorithm to efficiently
compute capacity using dynamic programming. Our results provide
some surprising insights on how side information at the charger can be used
to increase the overall capacity of the system.
This is joint work with Dor Shaviv (Stanford) and Haim Permuter (Ben Gurion
University).

**Title:** Recent results in content type coding

**Speaker:** Christina Fragouli

Professor, Electrical Engineering

UCLA

**Abstract:** Content-type coding starts from the observation that, in many cases, we do not need to serve specific messages, but rather any message within a content type. Content-type traffic pervades a host of applications today, ranging from search engines and recommender networks to newsfeeds and advertisement networks. In our work we ask a novel question: are there benefits in designing network and channel codes specifically tailored to content-type requests? We provide theoretical bounds and algorithms, and show that in some cases we can achieve exponential benefits in polynomial time.

**Title:** On nanopore sequencing

**Speaker:** Suhas Diggavi

Professor, Electrical Engineering

UCLA

**Abstract:** While DNA sequencing technology has undergone a major revolution, the read lengths afforded by the most popular sequencing technologies remain small (in the range of 100-200). A promising alternative to short-read sequencers is provided by nanopore sequencing, where DNA is passed through a nanopore and the induced current variations due to differing nucleotides can be read accurately from the nanopore. This technology has the potential to read long DNA fragments, potentially even allowing the sequencing of an entire genome in one shot. However, there are many algorithmic challenges in realizing this potential, due to the many transformations between the DNA nucleotides and the nanopore current. In this talk, we propose a mathematical model for nanopore sequencing and develop some algorithms and analysis for nanopore sequencers.

**Title:** Speed Control in a Multiple Access Server under Delay Constraints

**Speaker:** Sibi Raj Pillai

Associate Professor, Department of Electrical Engineering

IIT Bombay

**Abstract:** A set of distributed queues served by a MAC server is considered. The objective is to design optimal packet schedulers and server-speed (rate) adaptation techniques to meet constraints on average or maximal delay and transmit power. For the maximal delay metric, schedulers minimizing the average power can be identified by an iterative procedure. The average packet delay case is closely connected to service-rate-controlled queues; this can be exploited to obtain efficient communication schemes using the shortest remaining processing time first (SRPT) policy.
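
The SRPT policy mentioned at the end is easy to state concretely. A minimal single-queue, unit-speed sketch (illustrative only, not the multi-queue rate-adaptive scheme of the talk):

```python
import heapq

def srpt_schedule(jobs):
    """Completion times under Shortest Remaining Processing Time first on a
    single unit-speed server. jobs: list of (arrival_time, size)."""
    events = sorted((a, i, s) for i, (a, s) in enumerate(jobs))
    t, k, done, active = 0.0, 0, {}, []   # active: heap of [remaining, id]
    while k < len(events) or active:
        if not active:                    # server idle: jump to next arrival
            t = max(t, events[k][0])
        while k < len(events) and events[k][0] <= t:
            a, i, s = events[k]
            heapq.heappush(active, [s, i])
            k += 1
        rem, i = active[0]
        # run the shortest job until it finishes or the next arrival occurs
        horizon = events[k][0] if k < len(events) else t + rem
        run = min(rem, horizon - t)
        t += run
        active[0][0] -= run
        if active[0][0] <= 1e-12:
            heapq.heappop(active)
            done[i] = t
        else:
            heapq.heapify(active)         # restore heap after in-place edit
    return [done[i] for i in range(len(jobs))]

# A size-1 job arriving at t=1 preempts the size-3 job that started at t=0.
print(srpt_schedule([(0, 3), (1, 1)]))    # -> [4.0, 2.0]
```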


**Title:** Sensitivity of Functions and Distributed Computing

**Speaker:** Shun Watanabe

Associate Professor, Department of Computer and Information Sciences

Tokyo University of Agriculture and Technology, Japan

**Abstract:** We consider a distributed coding problem for function computation. For i.i.d. sources satisfying a positivity condition, Ahlswede and Csiszár showed that, if the function to be computed is sensitive, then the communication rate needed to compute that function is as large as that needed to reproduce the entire source. In this talk, we will discuss various generalisations of this claim, which lead to a dichotomy theorem that differs from Han-Kobayashi's dichotomy theorem. Furthermore, we will discuss the optimal communication rate for some classes of functions that are not sensitive. This talk is based on joint work with Shigeaki Kuzuoka.

**Title:** Index Coding and Local Partial Clique Covers

**Speaker:** Arya Mazumdar

Assistant Professor, Department of Computer Science

University of Minnesota

**Abstract:** Index coding, or broadcasting with side information, is a network coding problem of fundamental importance. In this problem, given a directed graph, each vertex represents a user in need of information, and the neighborhood of each vertex represents the side information available to that user. The aim is to find an encoding into a minimum number of bits (the optimal rate) that, when broadcast, is sufficient to satisfy the need of every user. Not only is the optimal rate intractable to compute, it is also very hard to characterize in terms of other well-studied graph parameters or via a simpler formulation such as a linear program. Recently there has been a series of works that address this question and provide explicit schemes for index coding as the optimal value of a linear program, with rate given by well-studied properties such as the local chromatic number or the partial clique-cover number. There has also been a recent attempt to combine the existing notions of local chromatic number and partial clique covering into a unified notion, the local partial clique cover (Arbabjolfaei and Kim, 2014). We present a generalized novel upper bound (encoding scheme), in the form of the minimum value of a linear program, for optimal index coding. Our bound also combines the notions of local chromatic number and partial clique covering into a new definition of the local partial clique cover, which outperforms both of the previous bounds as well as the previous attempt at combining them.
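
As background, the classical clique-cover scheme that these LP bounds refine can be sketched in a few lines: one XOR symbol is broadcast per clique of the side-information graph, and every user peels off the messages it already knows. The graph and messages below are purely illustrative:

```python
from functools import reduce
from operator import xor

def clique_cover_code(messages, cliques):
    """One XOR symbol per clique: users inside a clique hold each other's
    messages as side information, so each clique needs only one broadcast."""
    return [reduce(xor, (messages[v] for v in c)) for c in cliques]

def decode(user, broadcast, cliques, side_info):
    """Peel the known messages off the XOR symbol of the user's clique."""
    for sym, c in zip(broadcast, cliques):
        if user in c:
            return reduce(xor, (side_info[v] for v in c if v != user), sym)

# Four users whose side-information graph is covered by cliques {0,1}, {2,3}:
# two broadcast symbols suffice instead of four uncoded transmissions.
msgs = [0b1010, 0b0110, 0b1111, 0b0001]
cliques = [(0, 1), (2, 3)]
tx = clique_cover_code(msgs, cliques)
assert decode(0, tx, cliques, {1: msgs[1]}) == msgs[0]
assert decode(3, tx, cliques, {2: msgs[2]}) == msgs[3]
print("broadcast symbols:", [bin(s) for s in tx])
```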

**Title:**

**Speaker:** Abhay Karandikar

Professor, Department of Electrical Engineering

IIT Bombay

**Title:** Online Learning of Power Allocation Policies in Energy Harvesting Communications

**Speaker:** Bhaskar Krishnamachari

Professor, Department of Electrical Engineering

USC

**Abstract:** We consider the problem of power allocation over a time-varying channel with an unknown distribution in energy harvesting communication systems. In this problem, the transmitter needs to choose its transmit power based on the amount of stored energy in its battery, with the goal of maximizing the average rate obtained over time. We model this problem as a Markov decision process (MDP) with the transmitter as the agent, the battery status as the state, the transmit power as the action, and the rate obtained as the reward. The average reward maximization problem over the MDP can be solved by a linear program (LP) that uses the transition probabilities for the state-action pairs and their mean rewards to choose a power allocation policy. Since the rewards associated with the state-action pairs are unknown, we propose an online learning algorithm, UCLP, that learns these rewards and adapts its policy over time. The UCLP algorithm solves the LP at each time step to choose its policy, using upper confidence bounds on the rewards. We prove that the reward loss, or regret, incurred by UCLP is upper bounded by a constant.
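
The "upper confidence bounds on the rewards" follow a standard pattern: an empirical mean plus an exploration bonus per state-action pair. A minimal sketch (the sqrt(2 log t / n) bonus is the textbook choice; UCLP's exact constants and its LP step are omitted, and the state-action names below are hypothetical):

```python
import math
import random

def ucb_bounds(counts, sums, t):
    """Optimistic estimates of the unknown mean rewards: empirical mean plus
    the standard sqrt(2 log t / n) confidence bonus."""
    return {sa: sums[sa] / counts[sa] + math.sqrt(2 * math.log(t) / counts[sa])
            for sa in counts}

random.seed(0)
# Hypothetical (battery level, power choice) pairs with unknown Bernoulli
# reward means; the numbers are purely illustrative.
true_mean = {("low", "tx_low"): 0.3, ("low", "tx_high"): 0.6}
counts = {sa: 0 for sa in true_mean}
sums = {sa: 0.0 for sa in true_mean}
for t in range(1, 2001):
    sa = random.choice(list(true_mean))           # placeholder exploration
    counts[sa] += 1
    sums[sa] += random.random() < true_mean[sa]   # observe Bernoulli reward
bounds = ucb_bounds(counts, sums, 2000)
print({sa: round(b, 3) for sa, b in bounds.items()})
```

With enough samples the bounds sit slightly above the true means, which is exactly the optimism a learning algorithm like UCLP feeds into its LP.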

**Title:** A New Competitive Ratio for Network Applications

**Speaker:** I-Hong Hou

Assistant Professor, Department of Electrical and Computer Engineering

Texas A&M University

**Abstract:** Network applications in highly mobile systems need to employ online algorithms that do not rely on precise predictions about future events. In order to meet hard performance guarantees in the presence of unknown future events, common practice is to add redundancy to the system. In this paper, we define a new competitive ratio that reflects the amount of redundancy needed to ensure given performance guarantees. We study two specific applications, namely online job allocation in cloud computing and online scheduling in delayed mobile offloading, and propose online algorithms for each of them. We prove that our algorithms are optimal. By comparing our algorithms with other commonly used policies, we also show that our policies need much less redundancy than those policies to provide the same degree of performance guarantees.

**Title:** Conditional Dependence Measures: Axioms, Estimators and Biological Applications

**Speaker:** Sreeram Kannan

Assistant Professor, Department of Electrical Engineering

University of Washington, Seattle

**Abstract:** We conduct an axiomatic study of the problem of estimating the strength of a known causal relationship between a pair of continuous variables. We propose that an estimate of causal strength should be based on the conditional distribution of the effect given the cause (and not on the driving distribution of the cause), and study dependence measures on conditional distributions. Shannon capacity, appropriately regularized, emerges as a natural measure under these axioms. We examine the problem of calculating Shannon capacity from samples and design a simple, consistent, and efficient fixed-k nearest neighbor estimator. The estimators have a direct application in inferring information flow in genetic circuits and are shown to outperform state-of-the-art estimators in single-cell flow cytometry.
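
The fixed-k nearest-neighbor idea underlying such estimators can be illustrated with the classical Kozachenko-Leonenko differential-entropy estimator in one dimension (a standard building block of this estimator family, not the talk's capacity estimator itself):

```python
import heapq
import math
import random

def digamma_int(n):
    """psi(n) for integer n >= 1: -gamma + sum_{j<n} 1/j."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, n))

def knn_entropy_1d(xs, k=3):
    """Kozachenko-Leonenko estimate (in nats) of differential entropy from
    1-D samples: psi(N) - psi(k) + log 2 + (1/N) sum_i log eps_i, where
    eps_i is the distance from x_i to its k-th nearest neighbour.
    Brute-force O(N^2) for clarity; k-d trees are used in practice."""
    n = len(xs)
    log_eps = 0.0
    for j, x in enumerate(xs):
        kth = heapq.nsmallest(k, (abs(x - y) for m, y in enumerate(xs) if m != j))[-1]
        log_eps += math.log(kth)
    return digamma_int(n) - digamma_int(k) + math.log(2.0) + log_eps / n

random.seed(1)
samples = [random.random() for _ in range(1000)]   # Uniform[0,1]: true H = 0
print(f"estimated entropy: {knn_entropy_1d(samples):.3f} nats (true value 0)")
```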

**Title:** The epsilon-Capacity Region of AWGN Multiple Access Channels

**Speaker:** Vincent Y.F. Tan

Assistant Professor, Department of Electrical and Computer Engineering

NUS

**Abstract:** We establish the epsilon-capacity region for the AWGN multiple access channel (MAC) with feedback under an expected power constraint by combining ideas from binary hypothesis testing, information spectrum analysis, Ozarow's coding scheme, and power control. This talk is based on joint work with Lan V. Truong and Silas L. Fong (NUS).

**Title:** Analysis of Double Covers of Factor Graphs

**Speaker:** Pascal Vontobel

Associate Professor, Department of Information Engineering

The Chinese University of Hong Kong

**Abstract:** Many quantities of interest in
communications (like the constrained capacity of a storage system) and
other areas can be expressed as the partition sum of some factor graph.
Although the exact calculation of the partition sum is in many cases
intractable, it can often be approximated rather well by the Bethe
partition sum. In earlier work, we have shown that graph covers are a
useful tool for expressing and analyzing the Bethe approximation. In this
paper, we present a novel technique for analyzing double covers, a
technique which ultimately leads to a deeper understanding of the Bethe
approximation.

**Title:** Efficient CSMA Based on Kikuchi Approximation

**Speaker:** Krishna Jagannathan

Assistant Professor, Department of Electrical Engineering

IIT Madras

**Abstract:** CSMA algorithms based on Gibbs sampling can achieve throughput
optimality if appropriate attempt rates are employed. However, the problem
of computing these attempt rates is NP-hard. Inspired by the well-known
Kikuchi approximation, we propose a simple distributed algorithm, which
obtains closed form estimates for the attempt rates. We also prove that our
algorithm is exact for a class of graphs, namely chordal graphs.
Numerical results suggest that the proposed algorithm outperforms the
existing Bethe approximation based algorithms for spatial networks.
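
Idealised CSMA of this kind is commonly modelled as Glauber dynamics on the independent sets of the conflict graph, with per-link attempt rates controlling the stationary throughputs. A minimal sketch (illustrative 3-link network, not the Kikuchi-based algorithm of the talk):

```python
import random

def csma_glauber(adj, rates, steps, seed=0):
    """Idealised CSMA as Glauber dynamics on independent sets of the conflict
    graph: a random link wakes each slot; if no conflicting neighbour is on,
    it transmits w.p. lam/(1+lam). Returns each link's empirical airtime."""
    rng = random.Random(seed)
    n = len(adj)
    active = [False] * n
    on_time = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        if any(active[j] for j in adj[i]):        # carrier sense: medium busy
            active[i] = False
        else:
            active[i] = rng.random() < rates[i] / (1 + rates[i])
        for j in range(n):
            on_time[j] += active[j]
    return [c / steps for c in on_time]

# 3-link path: the middle link conflicts with both outer links, so in the
# product-form stationary law (rates = 1) it gets 1/5 of the airtime
# versus 2/5 for each outer link.
adj = {0: [1], 1: [0, 2], 2: [1]}
thr = csma_glauber(adj, rates=[1.0, 1.0, 1.0], steps=200_000)
print([round(x, 2) for x in thr])
```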

**Title:** A Fast Algorithm for Demixing Signals with Structured Sparsity

**Speaker:** Chinmay Hegde

Assistant Professor, Department of Electrical and Computer Engineering

Iowa State University

**Abstract:** Demixing, or source separation, involves disentangling a complicated signal into simpler, more informative components. Algorithms for signal demixing impact several applications, ranging from interference cancellation in communication signals and foreground-background separation in images and video to outlier suppression in machine learning. Central to several modern demixing algorithms is an assumption of some form of low-dimensional structure in the signal components. However, the majority of these modern algorithms are based on convex optimization, and their computational complexity can be high (polynomial) in the signal size. In this paper, we propose a new algorithmic approach to signal demixing based on structured sparsity models. Our approach leverages recent advances in constructing sparsity models via appropriately chosen graphs; this graphical approach can be shown to model a diverse variety of low-dimensional structures in signals. Despite being highly nonconvex, our algorithm exhibits a nearly linear running time and is therefore scalable to very high-dimensional signals. We supplement our proposed algorithm with a rigorous theoretical analysis, providing sufficient conditions for provable reconstruction of the underlying components. Finally, we demonstrate the validity of the method via numerical experiments on real 2D image data.

**Title:** Testing Against Independence with Multiple Decision Centers

**Speaker:** Michèle Angela Wigger

Assistant Professor, Communications and Electronics Department

Telecom ParisTech

**Abstract:** We consider a three-terminal distributed hypothesis testing against independence problem. Two terminals act as decision centers and have to decide whether their observed sequences are independent of the sequence observed at the third terminal. The third terminal communicates with the two decision centers over rate-limited noise-free pipes. We characterize the optimal exponential decays of the type-II error probabilities at the two decision centers given that the type-I error probabilities vanish for increasing blocklengths. These exponential decays are functions of the maximum rates allowed over the noise-free communication pipes.

**Title:** Recent results in multi-version codes, update-efficient codes and consistent shared memory emulation

**Speaker:** Viveck Cadambe

Assistant Professor, EE Department

Pennsylvania State University

**Abstract:** In several modern applications of distributed storage systems including
implementation of key value stores, it is important to ensure consistency.
Consistency is the notion that, as the data stored gets updated, a client
trying to read the data should get the most recent version of the data. The
problem of ensuring consistent access to data despite frequent updates in a
distributed storage system is well studied in distributed systems theory,
and is often referred to as *shared memory emulation* in literature.
In information theory, the *multi-version coding* framework has been
proposed recently as a method of analyzing storage costs in systems where
consistency is important. In this talk, we review the multi-version coding
framework and present some new results. First, we use simple ideas related
to Slepian-Wolf coding and update-efficient codes to obtain significant
storage cost savings when successive data versions are correlated. As a
byproduct, we settle a conjecture related to the update efficiency of a
good code for the erasure channel. Finally, we describe results that
explicitly connect the multi-version coding model to the shared memory
emulation model.

**Title:** Zero Rating: Can ISPs Choose Winning Content?

**Speaker:** Jayakrishnan U Nair

Assistant Professor, Department of Electrical Engineering

IIT Bombay

**Abstract:** Services with discriminatory pricing of content by Internet Service
Providers (ISPs) have resulted in significant controversy in the recent
past, in India and around the world. In our public discourse, conflating
discriminatory pricing with net neutrality (technically, non-discriminatory
treatment of packets in the network) has added intensity to the debate.
However, the effect of such pricing, while still being non-discriminatory
in carriage, has not been well understood. This work addresses that gap.
We focus on zero rating, a popular model of discriminatory pricing in which
a certain basket of `zero-rated' Internet services is made available to
customers for free. These zero-rated services are in turn sponsored by
payments to the ISP by the corresponding content providers (CPs). We
analyse a stylised model with one ISP and two CPs offering comparable
content, each of whom can choose to either sponsor or not sponsor their
service. The price for the customer and the price of sponsorship is
determined by the ISP. Our analysis reveals that zero-rating grants the ISP
significant power to `shape' the market in the sense that it can set prices
to create content monopolies. This power can be exercised to maximise
profits or for other purposes. Indeed, we show that the ISP has an
incentive to skew the market, creating a near monopoly for one of the CPs.
Interestingly, profit maximisation is achieved through a single `sponsoring
price' which can appear to be non-discriminatory.
(Based on joint work with Kunal Phalak and D. Manjunath)

**Title:** Completely blind sensing of multi-band signals

**Speaker:** Massimo Franceschetti

Professor, Electrical and Computer Engineering

UCSD

**Abstract:** We consider real multi-band signals $f(t)$ observed over an interval
$[-T/2,T/2]$, whose Fourier transform is zero outside $[-\Omega,\Omega]$,
and whose support set in the frequency domain is of measure at most
$2\Omega'$, where $\Omega' \ll \Omega$. Since $f(t)$ is bandlimited, it
can be reconstructed up to arbitrary precision over the observation
interval using a number of measurements $N=\Omega T/\pi + o(T)$, as $T
\rightarrow \infty$. If we have knowledge of the support set of the
Fourier transform of $f(t)$, then a theorem of Landau and Widom ensures
that such reconstruction is also possible using only $S=\Omega' T/\pi +
o(T)$ measurements, and this number is minimum in a precise
approximation-theoretic sense. However, assuming we only know that the
total measure of the support set is at most $\Omega'$, without further
investigation we can only conclude that a number of measurements $M$
sufficient to recover $f(t)$ satisfies $S \leq M \leq N$. Determining this
number precisely is the \emph{completely blind sensing} problem, and is an
open question. Usually, partially blind sensing is performed, assuming to
have some partial spectral information available a priori. In this paper,
we show that if $M=c_{\delta} S$, then the signal can be recovered in a
completely blind setting using $M$ measurements, where $c_{\delta}$ is a
constant which depends on the reconstruction accuracy $\delta$. We also
show that such reconstruction is robust to measurement errors.

**Title:** Dealing with Unwanted Information in Speech

**Speaker:** Hynek Hermansky

Professor, Electrical and Computer Engineering

Johns Hopkins University

**Abstract:** Besides a message, speech carries information from a number of additional sources, which introduce irrelevant variability (noise). As discussed in the talk, such noise comes in several distinct forms. Noise with predictable effects can often be suppressed analytically, and we discuss some techniques for doing so. Unpredictable effects of expected noise are typically dealt with successfully by extensive training of a machine on noisy data. Such multi-style training is currently the technique of choice in most practical situations and is often hard to beat. However, we argue for alternative adaptive multi-stream approaches, which could in principle also deal with noises that are unexpected. Our current efforts in this direction are discussed.

**Title:** Incorporating Scheduling in Poisson Point Process Model for Cellular Networks

**Speaker:** Radha Krishna Ganti

Assistant Professor, Department of Electrical Engineering

IIT Madras

**Abstract:** Owing to its realism and tractability, the Poisson point process (PPP) has emerged as a popular option for modelling and analyzing cellular networks. The usual approach for downlink analysis is to consider a typical user at the origin, which connects to one of the base stations as per a predefined cell selection rule, such as connecting to the geographically closest base station. Assuming the rest of the base stations to be interfering, the goal is then to derive the signal-to-interference ratio (SIR) distribution for the typical user. While this approach has been used successfully to evaluate key performance metrics for the typical user, picking a typical user for analysis is not sufficient to study the performance of scheduling schemes, where the goal is to schedule one user from amongst multiple options based on a certain criterion. Due to interference-induced correlation, the SIR and other related metrics observed by the users served by a given base station are correlated across time and space. This is the key technical challenge in incorporating scheduling in a PPP cellular model. In this talk, we discuss the scheduling problem in conjunction with stochastic geometry. In addition to highlighting the analytical challenges, we provide some initial results and list open problems in this area.
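
The baseline typical-user analysis described above is straightforward to simulate. A minimal Monte Carlo sketch (no fading, unit transmit power, illustrative parameters):

```python
import math
import random

def poisson(mean, rng):
    """Poisson sample via Knuth's method; adequate for moderate means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sir_of_typical_user(lam, radius, alpha, rng):
    """One PPP realisation around the typical user at the origin: connect to
    the geographically closest base station, treat all others as interferers
    (unit power, path loss r^-alpha, no fading)."""
    n = poisson(lam * math.pi * radius ** 2, rng)
    dists = sorted(radius * math.sqrt(rng.random()) for _ in range(n))
    if len(dists) < 2:
        return None
    return dists[0] ** -alpha / sum(r ** -alpha for r in dists[1:])

rng = random.Random(42)
sirs = [s for s in (sir_of_typical_user(1.0, 10.0, 4.0, rng)
                    for _ in range(2000)) if s is not None]
coverage = sum(s > 1.0 for s in sirs) / len(sirs)
print(f"empirical P(SIR > 0 dB) for the typical user: {coverage:.2f}")
```

Extending such a simulation to track all users of a cell across slots is exactly where the correlation issue raised in the abstract appears.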

**Title:** On the Indexability of a Class of Restless, Hidden-Markov Model Multiarm Bandits

**Speaker:** D. Manjunath

Professor, Department of Electrical Engineering

IIT Bombay

**Abstract:** We consider a restless multi-armed bandit in which each arm can be in one of two states, 0 or 1. Sampling an arm brings it to a known state, say state 0. Not sampling the arm takes it from state 0 to state 1 with probability $p_n$ for arm $n$. If the arm is in state 1, it stays in that state with probability one when not sampled. The state of an arm is never observed, but when the arm is sampled, a unit reward is obtained with probability one if the arm is in state 1 and with probability $r_n$ if it is in state 0. Since the state of an arm is never observed, we only have a belief about it; this belief can be updated using the observed reward when the arm is sampled and using $p_n$ when it is not. We show that this multi-armed bandit is Whittle-indexable and obtain a closed-form expression for the Whittle index of each arm in terms of the belief about its state. To the best of our knowledge, this class of restless, hidden-Markov bandits has not been studied in the literature. The model is motivated by playlist creation, in which each arm is an item and state 1 represents the user preferring to listen to the item. The minimum preferred gap between consecutive plays of item $n$ is governed by $p_n$, while $r_n$ models a residual level of interest in the item in state 0. [Joint work with Rahul Meshram and Aditya Gopalan]
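
The belief evolution described in the abstract is simple to write down: with the arm reset to state 0 after a play, the belief of being in state 1 climbs as pi' = pi + (1 - pi) p_n while the arm rests. A minimal sketch with illustrative parameter values:

```python
def belief_after_rest(pi, p_n):
    """Not sampled: state 0 -> 1 w.p. p_n and state 1 is absorbing, so the
    belief of being in state 1 evolves as pi' = pi + (1 - pi) * p_n."""
    return pi + (1.0 - pi) * p_n

def expected_reward(pi, r_n):
    """Sampling pays 1 w.p. 1 in state 1 and w.p. r_n in state 0."""
    return pi * 1.0 + (1.0 - pi) * r_n

# Illustrative parameters: after a play the arm resets to state 0 (belief 0),
# and the expected payoff of playing again grows geometrically with rest.
p_n, r_n = 0.3, 0.1
pi = 0.0
trajectory = []
for _ in range(5):
    pi = belief_after_rest(pi, p_n)
    trajectory.append(round(expected_reward(pi, r_n), 3))
print(trajectory)   # -> [0.37, 0.559, 0.691, 0.784, 0.849]
```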