

2015-16 CMI Faculty Lunches

October 15, 2015
Yisong Yue
The Dueling Bandits Problem
    In this talk, I will present the Dueling Bandits Problem, an online learning framework tailored to real-time learning from subjective human feedback. In particular, the Dueling Bandits Problem requires only pairwise comparisons, which can be reliably inferred in a variety of subjective feedback settings, such as information retrieval and recommender systems. I will provide an overview of the Dueling Bandits Problem along with basic algorithmic results. I will conclude by briefly discussing ongoing research directions, including applications to personalized medicine and theoretical connections to algorithmic game theory.
    This is joint work with Josef Broder, Bobby Kleinberg, and Thorsten Joachims.
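    To make the setting concrete, here is a minimal Python sketch of the dueling-bandits interaction protocol under an assumed preference matrix; the pair-selection rule shown is deliberately naive and is not one of the algorithms discussed in the talk.
        # Minimal sketch of the dueling-bandits protocol: the learner repeatedly
        # picks a pair of arms and observes only which one wins a noisy pairwise
        # comparison. The preference matrix P and the selection rule are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)
        K, T = 5, 2000
        strengths = np.linspace(0.1, 0.9, K)
        # P[i, j] = probability that arm i beats arm j (assumed logistic model).
        P = 1.0 / (1.0 + np.exp(-(strengths[:, None] - strengths[None, :])))

        wins = np.ones((K, K))       # smoothed pairwise win counts
        plays = 2 * np.ones((K, K))

        for t in range(T):
            # Naive rule: compare the current empirical leader against a random
            # challenger (real dueling-bandit algorithms choose pairs far more carefully).
            leader = np.argmax((wins / plays).mean(axis=1))
            challenger = rng.integers(K)
            leader_won = rng.random() < P[leader, challenger]
            winner, loser = (leader, challenger) if leader_won else (challenger, leader)
            wins[winner, loser] += 1
            plays[winner, loser] += 1
            plays[loser, winner] += 1

        print("estimated best arm:", np.argmax((wins / plays).mean(axis=1)))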

November 12, 2015
Joel Tropp
Universality laws for randomized dimension reduction
    Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability.
    This talk describes a phase transition in the behavior of the dimension reduction map as the embedding dimension increases. The location of this phase transition is universal for a large class of datasets and random dimension reduction maps. Furthermore, the stability properties of randomized dimension reduction are also universal. These results have many applications in numerical analysis, signal processing, and statistics.
    Joint work with Samet Oymak.
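    As a small illustration of the basic primitive discussed here, the following Python sketch applies a Gaussian random linear map to a hypothetical dataset and measures how well pairwise distances are preserved as the embedding dimension grows; it does not reproduce the phase-transition analysis from the talk.
        # Randomized dimension reduction with a Gaussian map: as the embedding
        # dimension m grows, pairwise distances of a fixed dataset are preserved
        # increasingly well. Dataset and sizes are hypothetical examples.
        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 50, 1000                                # 50 points in R^1000
        X = rng.standard_normal((n, d))

        diffs = X[:, None, :] - X[None, :, :]          # all pairwise differences
        true_norms = np.linalg.norm(diffs, axis=2)

        for m in (10, 50, 200, 800):
            A = rng.standard_normal((m, d)) / np.sqrt(m)   # random linear map R^d -> R^m
            proj_norms = np.linalg.norm(diffs @ A.T, axis=2)
            mask = true_norms > 0
            distortion = np.max(np.abs(proj_norms[mask] / true_norms[mask] - 1.0))
            print(f"m = {m:4d}: worst relative distortion of pairwise distances = {distortion:.3f}")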

December 10, 2015
John Preskill
Is spacetime a quantum error-correcting code?
    Two of the most amazing ideas in physics are the holographic principle and quantum error correction. The holographic principle asserts that all the information contained in a region of space is encoded on the boundary of the region, albeit in a highly scrambled form. Quantum error correction is the foundation of our hope that large-scale quantum computers can be operated to solve hard problems. I will argue that these two ideas are more closely related than had previously been appreciated.
    I'll describe quantum codes that realize the holographic principle, where the bulk information encoded on the boundary is well protected against errors in which portions of the boundary are removed. These codes provide simplified "toy models" of quantum spacetime, opening new directions in the study of quantum gravity, though many questions remain.
    Reference: http://arxiv.org/abs/1503.06237

January 14, 2016
Venkat Chandrasekaran
Finding Planted Subgraphs using the Schur-Horn Relaxation
    Extracting structured planted subgraphs from large graphs is a fundamental question that arises in a range of application domains. We describe a computationally tractable approach based on convex optimization to recover certain families of structured graphs that are embedded in larger graphs containing spurious edges. Our method relies on tractable semidefinite descriptions of majorization inequalities on the spectrum of a matrix, and we give conditions on the eigenstructure of a planted graph in relation to the noise level under which our algorithm succeeds. (Joint work with Utkan Candogan.)
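    A rough sketch of how a majorization-constrained (Schur-Horn-type) semidefinite relaxation of this kind might be posed in cvxpy follows; the tiny graphs, the objective scaling, and the absence of a rounding step are illustrative assumptions, and the precise formulation analyzed in the talk may differ.
        # Maximize correlation with the observed adjacency matrix over symmetric
        # matrices whose spectrum is majorized by that of the desired planted graph.
        import numpy as np
        import cvxpy as cp
        import networkx as nx

        n = 12
        planted = nx.adjacency_matrix(nx.cycle_graph(n)).toarray().astype(float)
        noise = nx.adjacency_matrix(nx.gnp_random_graph(n, 0.2, seed=1)).toarray().astype(float)
        observed = np.maximum(planted, noise)              # planted cycle plus spurious edges

        lam = np.sort(np.linalg.eigvalsh(planted))[::-1]   # target spectrum, descending

        M = cp.Variable((n, n), symmetric=True)
        constraints = [cp.trace(M) == lam.sum()]
        # Spectrum of M majorized by lam: partial sums of largest eigenvalues are dominated.
        constraints += [cp.lambda_sum_largest(M, k) <= lam[:k].sum() for k in range(1, n)]

        prob = cp.Problem(cp.Maximize(cp.trace(observed @ M)), constraints)
        prob.solve()
        print("relaxation value:", prob.value)
        print("mean of M on planted edges:", (M.value * planted).sum() / planted.sum())
        print("overall mean entry of M:  ", M.value.mean())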

March 10, 2016
Steven Low
Online optimization of power networks
    We are at the cusp of a historical transformation of our power systems into a more sustainable, dynamic, intelligent, and distributed form. We are interested in the optimization and control of a large network of distributed energy resources in the plug-and-play environment of the future. This requires solving power flow equations, which is well known to be hard. The grid, however, solves them exactly, in real time and at scale, and we propose to exploit this explicitly to carry out part of our optimization algorithm. This approach not only improves scalability, but also naturally adapts to evolving network conditions. In this talk, we present two examples.
    The first example is an online algorithm that solves an optimal power flow problem at a slow timescale on a radial network, where the controllable devices continuously interact with the network, which implicitly computes a power flow solution for each control action. Collectively, these devices and the network implement a gradient projection algorithm in real time. We prove that the proposed algorithm converges to a set of local optima and provide sufficient conditions under which it converges to a global optimum. We derive an upper bound on the suboptimality gap of any local optimum. This bound suggests that any local optimum is almost as good as any strictly feasible point.
    In the second example, the online algorithm integrates primary frequency regulation, secondary frequency regulation, and congestion management at a fast timescale. The algorithm is distributed in that it requires communication only between neighboring buses. Collectively, the controllable devices and the swing dynamics of the network implement a primal-dual algorithm to rebalance power, restore nominal frequency and inter-area flows, and enforce line limits at a minimum control cost. We prove sufficient conditions under which the algorithm converges to a global optimum.
    (Joint work with Changhong Zhao, Enrique Mallada, and Lingwen Gan.)
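    The following toy Python sketch illustrates the "network in the loop" idea from the first example: a projected-gradient controller updates its control action and then reads back a measurement from a black-box network response instead of solving the power flow equations itself. The quadratic cost, the linear toy response, and the box constraints are hypothetical stand-ins, not the OPF formulation from the talk.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 6                                # number of controllable devices
        u_min, u_max = -1.0, 1.0             # feasible box for control actions

        # Hypothetical stand-in for the physical network: it maps a control
        # action u to a measured quantity y via a simple linear response.
        B = rng.standard_normal((n, n)) * 0.1 + np.eye(n)
        def network_response(u):
            return B @ u

        # Hypothetical convex cost on the measurement: deviation from a target.
        y_target = rng.standard_normal(n)
        def cost_gradient_wrt_y(y):
            return y - y_target

        u = np.zeros(n)
        step = 0.1
        for t in range(200):
            y = network_response(u)                       # "measure" rather than solve
            grad_u = B.T @ cost_gradient_wrt_y(y)         # sensitivity of cost to the control
            u = np.clip(u - step * grad_u, u_min, u_max)  # gradient step + projection onto box

        print("final cost:", 0.5 * np.sum((network_response(u) - y_target) ** 2))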

April 14, 2016
Michelle Effros
Cooperation for Communication
    In this talk, I will tell a little bit of the unfolding story of cooperation. Using simple models to capture the cost and benefit of cooperation between communicating devices, we seek to understand whether, where, and by how much the benefit of cooperation can exceed its cost.

May 12, 2016
Babak Hassibi
Simple Algorithms and Guarantees for Matrix Completion over GF(2)
    Let X be an n_1 by n_2 matrix with entries in GF(2) and rank r < min(n_1, n_2). We consider the problem of reconstructing X given only a subset of its entries. This problem has recently found numerous applications, most notably in network and index coding, where finding optimal linear codes (over some finite field GF(q), say) can be reduced to finding the minimum-rank completion of a matrix with a subset of revealed entries. The problem of matrix completion over the reals also has many applications, and in recent years several polynomial-time algorithms with provable recovery guarantees have been developed. However, to date, such algorithms do not exist in the finite-field case. We propose a linear-algebraic algorithm, based on inferring low-weight relations among the rows and columns of X, that attempts to complete X from a random subset of its entries. We establish conditions under which the algorithm runs in polynomial time and can successfully complete X with high probability from a vanishing fraction of its entries. We then propose a linear programming-based extension of our basic algorithm and evaluate it empirically. Our simulations suggest that this linear programming-based method can successfully complete n-by-n random rank-r matrices (for n < 100 and r < 10) from at most 3 log_2 N_{n,r} entries, where N_{n,r} is the number of n-by-n rank-r matrices with elements in GF(2).
    This is joint work with James Saunderson and Maryam Fazel.
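    For readers unfamiliar with GF(2) linear algebra, here is a small Python building block of the kind such methods rely on: Gaussian elimination mod 2 that computes the rank of a 0/1 matrix and, as a byproduct, the linear relations among its rows. This is only a supporting routine, not the completion algorithm from the talk.
        import numpy as np

        def gf2_rank_and_row_relations(X):
            """Return (rank, R) where the rows of R span the left null space of X over GF(2)."""
            X = np.array(X, dtype=np.uint8) % 2
            m, n = X.shape
            # Augment with an identity to track which row combinations produce each row.
            A = np.concatenate([X, np.eye(m, dtype=np.uint8)], axis=1)
            pivot_row = 0
            for col in range(n):
                candidates = np.nonzero(A[pivot_row:, col])[0]
                if candidates.size == 0:
                    continue
                r = pivot_row + candidates[0]
                A[[pivot_row, r]] = A[[r, pivot_row]]     # swap pivot into place
                others = np.nonzero(A[:, col])[0]
                others = others[others != pivot_row]
                A[others] ^= A[pivot_row]                 # eliminate the column mod 2
                pivot_row += 1
                if pivot_row == m:
                    break
            rank = pivot_row
            # Zero rows of the reduced matrix correspond to GF(2) relations among rows of X.
            return rank, A[rank:, n:]

        # Example: for this matrix, row0 + row1 = row2 (mod 2), so the GF(2) rank is 2.
        X = np.array([[1, 0, 1, 1],
                      [0, 1, 1, 0],
                      [1, 1, 0, 1]])
        rank, relations = gf2_rank_and_row_relations(X)
        print("GF(2) rank:", rank)
        print("row relations (coefficient vectors over GF(2)):")
        print(relations)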

2014-15 CMI Faculty Lunches

June 4, 2015
Katrina Ligett
Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard
    One of the most appealing aspects of the (coarse) correlated equilibrium concept is that natural dynamics quickly arrive at approximations of such equilibria, even in games with many players. In addition, there exist polynomial-time algorithms that compute exact (coarse) correlated equilibria. In light of this, a natural question is how good the (coarse) correlated equilibria that arise from efficient algorithms or dynamics can be.
    In this talk, we'll discuss this question. Our main result is strongly negative: we show that in multiplayer games that have a succinct representation, it is NP-hard to compute any coarse correlated equilibrium (or approximate coarse correlated equilibrium) with welfare strictly better than the worst possible. The focus on succinct games ensures that the underlying complexity question is interesting; many multiplayer games of interest are in fact succinct. Our results imply that, while one can efficiently compute a coarse correlated equilibrium, one cannot provide any nontrivial welfare guarantee for the resulting equilibrium, unless P=NP. We show that analogous hardness results hold for correlated equilibria, and persist under the egalitarian objective or Pareto optimality.
    To complement the hardness results, we'll briefly discuss some positive news: we develop an algorithmic framework that identifies settings in which we can efficiently compute an approximate correlated equilibrium with near-optimal welfare. We use this framework to develop an efficient algorithm for computing an approximate correlated equilibrium with near-optimal welfare in aggregative games.
    Joint work with Siddharth Barman.
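    The "natural dynamics" mentioned in the first paragraph can be illustrated with the Python sketch below: both players run multiplicative weights (a no-regret algorithm), and the empirical distribution of joint play approaches an approximate coarse correlated equilibrium. The payoff matrices, step size, and horizon are hypothetical; this shows the dynamics, not the constructions behind the hardness results.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((3, 3))          # payoffs to player 1 in a hypothetical 3x3 game
        B = rng.random((3, 3))          # payoffs to player 2
        eta, T = 0.1, 5000

        w1, w2 = np.ones(3), np.ones(3)
        joint_counts = np.zeros((3, 3))

        for t in range(T):
            p, q = w1 / w1.sum(), w2 / w2.sum()
            i, j = rng.choice(3, p=p), rng.choice(3, p=q)
            joint_counts[i, j] += 1
            # Full-information multiplicative-weights updates against the opponent's mixed strategy.
            w1 *= np.exp(eta * (A @ q))
            w2 *= np.exp(eta * (B.T @ p))
            w1, w2 = w1 / w1.sum(), w2 / w2.sum()   # renormalize to avoid overflow

        empirical = joint_counts / T
        # Coarse-correlated-equilibrium check for player 1: no fixed action should do
        # much better than following the empirical joint distribution of play.
        follow = float(np.sum(empirical * A))
        best_fixed = float(max(A[a, :] @ empirical.sum(axis=0) for a in range(3)))
        print("player 1: value of following =", round(follow, 3),
              " best fixed deviation =", round(best_fixed, 3))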

Apr 9, 2015
Leonard Schulman
Red Queen Dynamics
    I'm going to talk about work in progress on how natural selection may force diversity, and what this has to do with games, multiplicative weight updates, and duality gaps.

Mar 12, 2015
Richard Murray
Analysis, Design and Prototyping of Biomolecular Feedback Systems
    Advances over the past decade have given biological engineers new insights into the role of genetic circuits in nature and the design of biomolecular circuits to implement biological operations in vitro and in vivo. In this talk I will discuss the use of concepts from systems and control engineering as applied to the analysis and design of biological feedback circuits. After a brief survey of relevant concepts from synthetic biology, I will present some recent results that combine modeling, analysis, design, and experimental implementation of biological feedback circuits. These results include the role of redundant biological pathways in implementing robust decision-making strategies in cells, the use of biomolecular "breadboards" for prototyping and debugging engineered biomolecular circuits, and the implementation of circuits for regulation of gene expression and biomolecular event detection. Using these results as examples, I will discuss some of the open problems and research challenges in the area of feedback control using biological circuits.

Feb 12, 2015
Venkat Chandrasekaran
Relative Entropy Relaxations for Signomial Optimization
    Due to its favorable analytical properties, the relative entropy function plays a prominent role in a variety of contexts in information theory and statistics. In this talk, I'll discuss some of the beneficial computational properties of this function by describing a class of relative-entropy-based convex relaxations for obtaining bounds on signomial programs (SPs), which arise commonly in many problem domains. SPs are non-convex in general, and families of NP-hard problems can be reduced to SPs. The central idea underlying our approach is a connection between the relative entropy function and efficient proofs of nonnegativity via the arithmetic-geometric-mean inequality.
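    As a hedged, simplified illustration of that connection (stated here with normalized weights; the exact formulation in the talk may differ): consider a signomial with positive coefficients c_i and a single negative term, f(x) = \sum_i c_i e^{\alpha_i^T x} - d\, e^{\alpha_0^T x} with d > 0. For any weights \lambda_i \ge 0 with \sum_i \lambda_i = 1, the weighted arithmetic-geometric-mean inequality gives
        \sum_i c_i e^{\alpha_i^T x} \;\ge\; \Big( \prod_i (c_i/\lambda_i)^{\lambda_i} \Big) e^{(\sum_i \lambda_i \alpha_i)^T x},
    so if the weights are chosen with \sum_i \lambda_i \alpha_i = \alpha_0, then f is nonnegative whenever \sum_i \lambda_i \log(\lambda_i / c_i) \le -\log d. The left-hand side is a relative entropy, jointly convex in (\lambda, c), which is what makes searching for nonnegativity certificates of this type computationally tractable.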

Jan 8, 2015
Adam Wierman
Data Centers, Energy, and Online Optimization

Dec 11, 2014
Victoria Kostina
Fixed- and variable-length data compression at finite blocklength
    In data compression, a block of source symbols is compressed into a shorter string, while ensuring that the original block can be recovered from that encoded string, either exactly or approximately. Identifying the optimum rate-fidelity tradeoffs achievable in principle, regardless of the complexity of the code, is the subject of information-theoretic interest in data compression.
    Classical information theory characterizes the rate-fidelity tradeoffs achievable in the limit of infinite length of the source block to be compressed. A perennial question in information theory is how relevant the asymptotic fundamental limits are when the communication system is forced to operate at a given fixed blocklength. The finite blocklength (delay) constraint is inherent to all communication scenarios. In fact, in many systems of current interest, such as real-time multimedia communication, delays are strictly constrained, while in packetized data communication, packets are frequently on the order of 1000 bits. Despite the obvious practical interest in nonasymptotic fundamental limits, until recently such limits were widely believed to defy analysis.
    We show that in both fixed- and variable-length data compression, the nonasymptotic fundamental limit can be closely approximated by a function of just two parameters of the source, namely, the rate-distortion function (Shannon's asymptotic limit), and the rate-dispersion function, a new parameter of the source that characterizes the nonasymptotic behavior of the minimal achievable rate.
    In block-to-block data compression, a source block of k symbols is compressed down to n symbols, and the tradeoff between k, n and the achievable fidelity of reproduction is of interest. For memoryless sources, we show that there is a penalty over the rate-distortion function induced by the finite blocklength requirement, and the size of that penalty is closely governed by the rate-dispersion function. In other words, in block data compression, the rate-dispersion function measures the speed of approach to the rate-distortion function for a given source.
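    Loosely stated, and up to lower-order terms in the blocklength k, the kind of two-parameter approximation being referred to in the fixed-length setting is
        R(k, d, \epsilon) \approx R(d) + \sqrt{V(d)/k}\, Q^{-1}(\epsilon),
    where R(k, d, \epsilon) is the minimum rate compatible with blocklength k, distortion d, and excess-distortion probability \epsilon, R(d) is the rate-distortion function, V(d) is the rate-dispersion function, and Q^{-1} is the inverse of the complementary Gaussian distribution function.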
    In fixed-to-variable-length data compression, a block of k source outcomes is represented by a variable-length binary string. The nonasymptotic fundamental limit of interest is the minimum average length of the encoded string compatible with a given fidelity. Perhaps surprisingly, the behaviors of the fundamental limits in the fixed- and variable-length settings are starkly different. First of all, in the variable-length setting the strong converse does not hold: by allowing a nonvanishing, however small, excess-distortion probability, one can asymptotically beat the rate-distortion function. Moreover, this asymptotic limit is approached from below, i.e., the shorter the blocklength, the lower the achievable average rate! Still, as in fixed-length data compression, only two parameters of the source are required to give an accurate description of this counterintuitive behavior: its rate-distortion function and its rate-dispersion function.

Nov 12, 2014
Babak Hassibi
Comparison Lemmas: From Slepian to Non-Smooth Optimization
    In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from (possibly) noisy measurements in a variety of applications in statistics, signal processing, machine learning, etc. I will describe a fairly general theory for determining the performance (minimum number of measurements, mean-square error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, etc.). The genesis of the theory can be traced back to an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes).
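    For reference, one common form of the Slepian lemma alluded to here: if (X_i) and (Y_i) are centered Gaussian vectors with E[X_i^2] = E[Y_i^2] for all i and E[X_i X_j] \le E[Y_i Y_j] for all i \ne j, then for every t,
        P(\max_i X_i \ge t) \ge P(\max_i Y_i \ge t),
    and in particular E[\max_i X_i] \ge E[\max_i Y_i]; less correlated Gaussian processes have stochastically larger maxima. The comparison lemmas in the title refer to results of this type.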

Oct 9, 2014
Yisong Yue
Balancing the Explore/Exploit Tradeoff in Interactive Structured Prediction

2012-13 CMI Faculty Lunches

May 9, 2013
Chris Shannon

April 11, 2013

March 14, 2013

February 14, 2013

January 10, 2013

December 13, 2012
Leonard Schulman

November 8, 2012
Chris Umans
Fast Matrix Multiplication Using Coherent Configurations

October 11, 2012
Tracey Ho
Distributed Storage and Online Streaming as a Resource Allocation Problem

2011-12 CMI Faculty Lunches

May 10, 2012
Adam Wierman

April 12, 2012
Federico Echenique

March 8, 2012
Mani Chandy

February 2, 2012
Venkat Chandrasekaran

January 12, 2012
Richard Murray

2010-11 CMI Faculty Lunches

January 13, 2011
Joel Tropp

2008-9 CMI Faculty Lunches

June 4, 2009
Jerry Marsden

May 21, 2009
Richard Murray

April 23, 2009
Joel Tropp

March 19, 2009
Canceled

February 19, 2009
Erik Winfree

January 15, 2009
Adam Wierman

December 11, 2008
Emmanuel Candes

November 13, 2008
Gary Lorden

October 16, 2008
Yaser Abu-Mostafa

2007-8 CMI Faculty Lunches

May 15, 2008
Babak Hassibi

April 24, 2008
Mani Chandy

April 3, 2008
Tracey Ho

March 20, 2008 (rescheduled)
Mani Chandy

February 28, 2008
Mathieu Desbrun

January 17, 2008
Michelle Effros

December 13, 2007
Canceled

November 29, 2007
Chris Umans

October 25, 2007
Peter Schröder

2006-7 CMI Faculty Lunches

April 19, 2007
John Doyle

March 1, 2007
Federico Echenique

February 15, 2007
Robert McEliece

January 18, 2007
Richard Murray

December 14, 2006
Federico Echenique

November 16, 2006
Leonard Schulman

2005-6 CMI Faculty Lunches

May 25, 2006
Erik Winfree

April 20, 2006
Jerry Marsden

March 16, 2006
Babak Hassibi

February 16, 2006
Emmanuel Candes

January 26, 2006
Mani Chandy

December 8, 2005
Chris Umans

November 21, 2005
Tracey Ho (Joint with Lee Center)

October 20, 2005
Peter Schröder

2004-5 CMI Faculty Lunches

June 2, 2005
PP Vaidyanathan

May 12, 2005
Gary Lorden

April 21, 2005
Pietro Perona

March 10, 2005
Yaser Abu-Mostafa

February 10, 2005
Mathieu Desbrun

January 20, 2005
Canceled

December 9, 2004
Richard Murray

November 18, 2004
Federico Echenique

October 25, 2004
Babak Hassibi (joint with Lee Center)

October 14, 2004
Emmanuel Candes

2003-4 CMI Faculty Lunches

June 10, 2004
Bob McEliece

May 27, 2004
Leonard Schulman

May 20, 2004
Matt Jackson
(note change of date)

April 22, 2004
Chris Umans

April 8, 2004
Jerry Marsden

March 11, 2004
Peter Schröder

February 26, 2004
Erik Winfree
(Parsons-Gates 112)

February 12, 2004
Michelle Effros
(Avery Library)


