
2018-19
CMI Faculty Lunches

November 8, 2018
Venkat Chandrasekaran
Newton Polytopes and Relative Entropy Optimization
    We discuss how relative entropy is uniquely suited to optimization problems involving sparse polynomials. Our results connect to the work of Descartes (the rule of signs) and Khovanskii (the theory of fewnomials). The Newton polytope associated to a polynomial plays a central role in our development. This is joint work with Riley Murray and Adam Wierman.
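    For intuition: the Newton polytope of a polynomial is simply the convex hull of its exponent vectors. A minimal Python sketch (the polynomial is an illustrative example of ours, not one from the talk):

        import numpy as np
        from scipy.spatial import ConvexHull

        # A sparse polynomial in two variables,
        #   p(x, y) = 3 + x^4*y + x*y^4 - 2*x^2*y^2,
        # represented by its exponent vectors.
        exponents = np.array([[0, 0], [4, 1], [1, 4], [2, 2]])

        # The Newton polytope is the convex hull of the exponents.
        hull = ConvexHull(exponents)
        print("vertices:", exponents[hull.vertices])
        # The exponent (2, 2) of the negative term lies inside the hull
        # of the other three -- the configuration in which AM/GM-style
        # (relative entropy) nonnegativity certificates can apply.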

December 13, 2018
Omer Tamuz
Equitable Voting Rules
    Joint with Laurent Bartholdi, Wade Hann-Caruthers, Maya Josyula and Leeat Yariv.
    Given n voters and two candidates A and B, a voting rule is a map from {A,B,abstain}^n to {A,B,tie}. An important example is majority, which has a desirable symmetry property: permuting the voters leaves the result unchanged. We study a weakening of this symmetry assumption, which allows for a far richer set of rules that still treat voters equally. We show that these rules can have very small---but not too small---winning coalitions (i.e., sets of voters that can control the election outcome).
    I will discuss the relation to various known results and open problems in the theory of finite groups. No previous knowledge required.
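    As a concrete illustration of the symmetry being weakened, here is a toy Python check (ours, not from the talk) that majority is invariant under every permutation of the voters; the talk's equitable rules, roughly, require only enough symmetry that every voter looks like every other:

        from itertools import permutations, product

        def majority(profile):
            """profile: a tuple over {'A', 'B', 'abstain'}."""
            a, b = profile.count('A'), profile.count('B')
            return 'A' if a > b else 'B' if b > a else 'tie'

        n = 4
        for profile in product(['A', 'B', 'abstain'], repeat=n):
            for perm in permutations(range(n)):
                permuted = tuple(profile[i] for i in perm)
                assert majority(permuted) == majority(profile)
        print("majority is invariant under all voter permutations")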

March 14, 2019
Peter Schröder
Shape from Metric
    Joint work with Albert Chern, Felix Knöppel and Ulrich Pinkall (all of TU Berlin Math)
    We study the isometric immersion problem for orientable surface triangle meshes endowed with only a metric: given the combinatorics of the mesh together with edge lengths, approximate an isometric immersion into R^3.
    To address this challenge we develop a discrete theory for surface immersions into R^3. It precisely characterizes a discrete immersion, up to subdivision and small perturbations. In particular our discrete theory correctly represents the topology of the space of immersions, i.e., the regular homotopy classes which represent its connected components. Our approach relies on unit quaternions to represent triangle orientations and to encode, in their parallel transport, the topology of the immersion. In unison with this theory we develop a computational apparatus based on a variational principle. Minimizing a non-linear Dirichlet energy optimally finds extrinsic geometry for the given intrinsic geometry and ensures low metric approximation error.
    We demonstrate our algorithm with a number of applications from mathematical visualization and art-directed isometric shape deformation, which mimics the behavior of thin materials with high membrane stiffness.

May 9, 2019
Adam Wierman
Transparency and Control in Platforms & Networked Markets
    Platforms have emerged as a powerful economic force, driving both traditional markets, like the electricity market, and emerging markets, like the sharing economy. The power of platforms comes from their ability to tame the complexities of networked marketplaces -- marketplaces where there is not a single centralized market, but instead a network of interconnected markets loosely defined by a graph of feasible exchanges. Despite the power and prominence of platforms, their workings are often guarded secrets. Further, many competing platforms make very different design choices, but little is understood about the impact of these differing choices. In this talk, I will overview recent work from our group that focuses on reverse engineering the design of platforms and understanding the consequences of the design choices underlying transparency in modern platforms. I will use electricity markets and ridesharing services as motivating examples throughout.

June 13, 2019
Babak Hassibi
Deep Learning and the Blessing of Dimensionality
    Stochastic descent methods have recently gained tremendous popularity as the workhorse for deep learning. So much so that it is now widely recognized that the success of deep networks is not only due to their special deep architecture, but also due to the behavior of the stochastic descent methods used, which plays a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD)--originally developed for quadratic loss and linear models in the context of H-infinity control in the 1990s--and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. These minimax properties can be used to explain the convergence and implicit regularization of stochastic descent methods in highly over-parametrized settings, exemplified by training a deep neural network. This observation gives some insight into why deep networks exhibit such powerful generalization abilities. It is also a further example of what is increasingly referred to as the "blessing of dimensionality". We also show how different variations of the algorithms can lead to different generalization performances and note some very counter-intuitive phenomena.
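    For reference, a minimal sketch of the stochastic mirror descent update with a q-norm potential (plain numpy; q = 2 recovers ordinary SGD, and the over-parametrized least-squares model below is our illustration, not the talk's):

        import numpy as np

        def smd(grad, w0, lr, steps, q=2.0):
            """SMD with potential psi(w) = ||w||_q^q / q (q = 2: plain SGD)."""
            w = w0.copy()
            for _ in range(steps):
                z = np.sign(w) * np.abs(w) ** (q - 1)   # mirror map: grad psi(w)
                z -= lr * grad(w)                       # gradient step in dual space
                w = np.sign(z) * np.abs(z) ** (1.0 / (q - 1))  # inverse mirror map
            return w

        # Over-parametrized least squares: many interpolating solutions.
        # SMD implicitly selects the one closest to w0 in the Bregman
        # divergence of psi -- one form of implicit regularization.
        rng = np.random.default_rng(0)
        A, b = rng.normal(size=(20, 100)), rng.normal(size=20)
        def grad(w):
            i = rng.integers(len(b))          # one random row: a stochastic gradient
            return (A[i] @ w - b[i]) * A[i]
        w = smd(grad, np.zeros(100), lr=0.01, steps=20000)
        print("residual:", np.linalg.norm(A @ w - b))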

2017-18
CMI Faculty Lunches

December 14, 2017
Markus Meister
Neural Computations for Vision
    The task of the human visual system is to extract from the onslaught of raw image information (~1 Gbit/s) the little knowledge that really matters to us (~20 bit/s). I will discuss what we are learning about this process in the early neural circuits of the visual system, and how the resulting insights can help devise prostheses for the blind.

January 11, 2018
Laura Doval
Whether or not to open Pandora's box
    I study a single-agent sequential search problem as in Weitzman, "Optimal search for the best alternative." Contrary to Weitzman, conditional on stopping, the agent may take any uninspected box without first inspecting its contents. This introduces a new trade-off. By taking a box without inspection, the agent saves on its inspection costs. However, by doing so, he runs the risk that its contents are lower than he anticipated. I identify sufficient conditions on the parameters of the environment under which I characterize the optimal policy. Both the order in which boxes are inspected and the stopping rule may differ from those in Weitzman's model. Moreover, I provide additional results that partially characterize the optimal policy when these conditions fail.
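    For background, Weitzman's classical policy is driven by a reservation value z_i for each box, solving c_i = E[(X_i - z_i)^+]. A small numerical sketch (two-point prizes and toy parameters of our choosing) computes these indices for the classical rule that the talk modifies:

        def reservation_value(values, probs, cost):
            """Solve cost = E[(X - z)+] for z by bisection (discrete prize X)."""
            lo, hi = min(values) - 1.0, max(values)
            for _ in range(100):
                z = (lo + hi) / 2
                gain = sum(p * max(v - z, 0.0) for v, p in zip(values, probs))
                lo, hi = (lo, z) if gain < cost else (z, hi)
            return z

        # Three boxes: (prize values, probabilities, inspection cost).
        boxes = [([0, 10], [0.5, 0.5], 1.0),
                 ([0, 6],  [0.3, 0.7], 0.5),
                 ([2, 4],  [0.5, 0.5], 0.1)]
        print([reservation_value(v, p, c) for v, p, c in boxes])
        # Weitzman: inspect boxes in decreasing order of z_i and stop once the
        # best inspected value exceeds every remaining z_i. The talk adds the
        # option of taking an uninspected box, its inspection cost unpaid.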

February 8, 2018
Lior Pachter
MI in single-cell genomics

March 8, 2018
Federico Echenique
Market equilibrium, fairness, and fractional allocation of discrete resources
    We propose to use endowments as a policy instrument in market design. Endowments give agents the right to enjoy certain resources. For example, in school choice, one can ensure that low-income families have a shot at high-quality schools by endowing them with a chance of admission.
    For the CMI lunch, I will introduce the model of competitive markets as a method to probabilistically allocate discrete resources, and point out some of the limitations of competitive equilibria in this model. Then I will propose a new notion of fairness, trying to make sense of fairness among unequal agents, and a new competitive solution. Time permitting, I'll outline some of the main results.

April 12, 2018
Richard Murray
Rapprochement Between Formal Methods and Control Theory
    In computer science, specifically software and hardware engineering, formal methods are mathematically based techniques for the specification, development, and verification of software and hardware systems. The field of control provides the principles and methods used to design engineering systems that maintain desirable performance by automatically adapting to changes in the environment. It turns out that both of these fields have been solving similar problems using different mathematical languages for the past 50 years or so. Mani Chandy tried to explain this to me starting about 20 years ago, but it took me about 15 years to figure out what he was talking about and I've only come to truly appreciate the connections in the last five years or so (since the last time I gave a CMI talk on this subject). It turns out the ideas are pretty intuitive, so in this talk I will attempt to explain what I have learned to whoever shows up, in about 45 minutes. We'll see how it goes. I promise not to mention sparsity, matrix factorization, or mutual information, and I will try to avoid too much use of boxes, diamonds, and double turnstiles.

May 10, 2018
Omer Tamuz
The Furstenberg-Poisson Boundary of Random Walks on Groups
    The Furstenberg-Poisson boundary of a random walk on a group captures the uncertainty in the walk's asymptotic behavior. I will define the boundary, give some examples, and explain how it reflects algebraic and geometric properties of groups, allowing us to study them using probabilistic tools. I will also present a recent result with Joshua Frisch, Yair Hartman and Pooya Vahidi Ferdowsi, which resolved a long-standing problem by giving a simple algebraic characterization of all groups for which there exists a random walk with a non-trivial boundary.

2016-17
CMI Faculty Lunches

October 13, 2016
Venkat Chandrasekaran
Learning Semidefinite Regularizers via Matrix Factorization
    Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing due to their effectiveness in addressing difficulties arising from ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available. Previous work under the title of 'dictionary learning' or 'sparse coding' may be viewed as learning a regularization function that can be computed via linear programming. We describe generalizations of these methods to learn regularizers that can be computed and optimized via semidefinite programming. Our approach for learning such semidefinite regularizers is based on computing structured factorizations of data matrices, and our algorithmic approach for computing these factorizations combines recent techniques for rank minimization problems along with operator analogs of Sinkhorn scaling. The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems. (Joint work with Yong Sheng Soh.)
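    The factorization algorithm uses operator analogs of Sinkhorn scaling; for readers unfamiliar with the classical matrix version, here is a compact sketch (the operator analog used in the talk is more involved):

        import numpy as np

        def sinkhorn(M, iters=500):
            """Scale a positive matrix to be doubly stochastic: find diagonal
            D1, D2 such that D1 @ M @ D2 has unit row and column sums."""
            r, c = np.ones(M.shape[0]), np.ones(M.shape[1])
            for _ in range(iters):
                r = 1.0 / (M @ c)      # rescale rows to fix row sums
                c = 1.0 / (M.T @ r)    # rescale columns to fix column sums
            return np.diag(r) @ M @ np.diag(c)

        S = sinkhorn(np.random.rand(4, 4) + 0.1)
        print(S.sum(axis=0), S.sum(axis=1))  # all sums close to 1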

November 10, 2016
John Doyle
Universal laws and architectures: brains, bugs, guts, hearts, nets, grids, and zombies
    We'll do some live demos to motivate new math explaining the extremes of human sensorimotor control, the laws and physiology that constrain it, and the architectures needed to achieve what remains possible. Time permitting, we'll see how the same laws and architectures are universal in highly evolvable systems from bacterial cells to the Internet, and what tragic side effects so far seem unavoidable. There are lots of videos online, with links from http://www.cds.caltech.edu/~doyle

December 8, 2016
Thomas Vidick
Overlapping qubits
    I will discuss recent work on the problem of testing quantum devices. We focus on the basic problem of testing dimension.
    An ideal system of n qubits has 2^n dimensions, and this exponential scaling grants quantum computers their power. In a real system, however, the qubits will not be perfect. They can "overlap," in the sense that an operation on one qubit might slightly affect the others. We show that, allowing for slight overlaps, n qubits can fit in just polynomially many dimensions. (With overlap defined in a natural way, all pairwise overlaps can be at most epsilon in n^{O(1/epsilon^2)} dimensions.) Our results show that real systems with only apparently small imperfections could have much less power than one would have expected.
    On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.
    Joint work with Rui Chao, Chris Sutherland, and Ben Reichardt.

February 9, 2017
Joel Tropp
Sketchy decisions: Low-rank matrix optimization with optimal storage
    Convex matrix optimization problems with low-rank solutions play a fundamental role in signal processing, statistics, and related disciplines. These problems are difficult to solve because of the cost of maintaining the matrix decision variable, even though the low-rank solution has few degrees of freedom. This talk presents the first algorithm that provably solves these problems using optimal storage. The algorithm produces high-quality solutions to large problem instances that, previously, were intractable.
    Joint with Volkan Cevher, Roarke Horstmeyer, Quoc Tran-Dinh, Madeleine Udell, and Alp Yurtsever.
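    To convey the storage idea: rather than the full matrix variable, one can maintain a small randomized sketch from which a low-rank approximation is recovered at the end. A numpy illustration of that primitive (randomized range finding, a building block rather than the talk's algorithm itself):

        import numpy as np

        rng = np.random.default_rng(1)
        n, r = 500, 5
        X = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))  # rank-r, n x n

        Omega = rng.normal(size=(n, 2 * r))  # thin random test matrix
        Y = X @ Omega             # sketch: O(n*r) numbers instead of O(n^2)

        Q, _ = np.linalg.qr(Y)    # orthobasis for the sketch's range
        X_hat = Q @ (Q.T @ X)     # (streaming variants form Q.T @ X from a
                                  #  second sketch, never revisiting X)
        print(np.linalg.norm(X - X_hat) / np.linalg.norm(X))  # ~ 0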

March 9, 2017
Babak Hassibi
Comparison Lemmas, Convex Optimization and Phase Retrieval

April 13, 2017
Frederick Eberhardt
Causal Macro Variables
    Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? Often we have sensor-level data but assume that the relevant causal interactions occur at a higher scale of aggregation. Sometimes we only have aggregate measurements of causal interactions at a finer scale. I will motivate the general problem of causal discovery and present recent work on a framework and method for the construction and identification of causal macro-variables that ensures that the resulting causal variables have well-defined intervention distributions. Time permitting, I will show an application of this approach to large-scale climate data, for which we were able to identify the macro-phenomenon of El Niño using an unsupervised method on micro-level measurements of the sea surface temperature and wind speeds over the equatorial Pacific.


2015-16
CMI Faculty Lunches

October 15, 2015
Yisong Yue
The Dueling Bandits Problem
    In this talk, I will present the Dueling Bandits Problem, which is an online learning framework tailored towards real-time learning from subjective human feedback. In particular, the Dueling Bandits Problem only requires pairwise comparisons, which can be reliably inferred in a variety of subjective feedback settings such as information retrieval and recommender systems. I will provide an overview of the Dueling Bandits Problem along with basic algorithmic results. I will then conclude by briefly discussing some recent results in ongoing research directions, including applications to personalized medicine as well as theoretical connections to algorithmic game theory.
    This is joint work with Josef Broder, Bobby Kleinberg, and Thorsten Joachims.
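    To make the feedback model concrete, here is a deliberately naive explore-then-commit baseline that only ever sees the outcomes of noisy pairwise duels (toy preference matrix of our choosing; the algorithms in the talk manage this trade-off far more carefully):

        import numpy as np

        rng = np.random.default_rng(2)
        K = 5
        pref = rng.uniform(0.3, 0.7, size=(K, K))  # pref[i, j] = P(i beats j)
        pref = (pref + (1 - pref.T)) / 2           # enforce p_ij + p_ji = 1

        def duel(i, j):
            """The only feedback available: a single noisy comparison."""
            return i if rng.random() < pref[i, j] else j

        wins = np.zeros(K)
        for _ in range(2000):                      # explore with random duels
            i, j = rng.choice(K, size=2, replace=False)
            wins[duel(i, j)] += 1
        print("committing to arm", int(np.argmax(wins)))  # Borda-style winner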

November 12, 2015
Joel Tropp
Universality laws for randomized dimension reduction
    Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability.
    This talk describes a phase transition in the behavior of the dimension reduction map as the embedding dimension increases. The location of this phase transition is universal for a large class of datasets and random dimension reduction maps. Furthermore, the stability properties of randomized dimension reduction are also universal. These results have many applications in numerical analysis, signal processing, and statistics.
    Joint work with Samet Oymak.
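    A quick experiment conveys the flavor of universality (our toy, not the talk's result): the statistics of the randomly embedded data are nearly indifferent to the distribution of the map's entries, which is the kind of distribution-insensitivity the phase-transition theorem formalizes (this toy does not exhibit the transition itself).

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, trials = 1000, 100, 200
        x = rng.normal(size=n)
        x /= np.linalg.norm(x)                     # a fixed unit vector

        samplers = {"gaussian": lambda: rng.normal(size=(m, n)),
                    "random sign": lambda: rng.choice([-1.0, 1.0], size=(m, n))}
        for name, draw in samplers.items():
            norms = [np.linalg.norm(draw() @ x) / np.sqrt(m) for _ in range(trials)]
            print(name, np.mean(norms), np.std(norms))  # nearly identical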

December 10, 2015
John Preskill
Is spacetime a quantum error-correcting code?
    Two of the most amazing ideas in physics are the holographic principle and quantum error correction. The holographic principle asserts that all the information contained in a region of space is encoded on the boundary of the region, albeit in a highly scrambled form. Quantum error correction is the foundation of our hope that large-scale quantum computers can be operated to solve hard problems. I will argue that these two ideas are more closely related than had been previously appreciated.
    I'll describe quantum codes that realize the holographic principle, where the bulk information encoded on the boundary is well protected against errors in which portions of the boundary are removed. These codes provide simplified "toy models" of quantum spacetime, opening new directions in the study of quantum gravity, though many questions remain.
    Reference: http://arxiv.org/abs/1503.06237

January 14, 2016
Venkat Chandrasekaran
Finding Planted Subgraphs using the Schur-Horn Relaxation
    Extracting structured planted subgraphs from large graphs is a fundamental question that arises in a range of application domains. We describe a computationally tractable approach based on convex optimization to recover certain families of structured graphs that are embedded in larger graphs containing spurious edges. Our method relies on tractable semidefinite descriptions of majorization inequalities on the spectrum of a matrix, and we give conditions on the eigenstructure of a planted graph in relation to the noise level under which our algorithm succeeds. (Joint work with Utkan Candogan.)

March 10, 2016
Steven Low
Online optimization of power networks
    We are at the cusp of a historical transformation of our power systems into a more sustainable, dynamic, intelligent, and distributed form. We are interested in the optimization and control of a large network of distributed energy resources in a plug-and-play environment of the future. This requires solving power flow equations, which is well known to be hard. The grid, however, solves them exactly, in real time and at scale, and we propose to exploit this capability explicitly to carry out part of our optimization algorithm. This approach not only improves scalability, but also naturally adapts to evolving network conditions. In this talk, we present two examples.
    The first example presents an online algorithm to solve an optimal power flow problem at a slow timescale on a radial network, where the controllable devices continuously interact with the network, which implicitly computes a power flow solution given a control action. Collectively these devices and the network implement a gradient projection algorithm in real time. We prove that the proposed algorithm converges to a set of local optima and provide sufficient conditions under which it converges to a global optimum. We derive an upper bound on the suboptimality gap of any local optimum. This bound suggests that any local optimum is almost as good as any strictly feasible point.
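    Schematically, the first example is a feedback loop in which the physical network plays the role of a subroutine. A toy stand-in (a fixed linear map F replaces the power flow response the grid computes physically; all details here are illustrative):

        import numpy as np

        rng = np.random.default_rng(4)
        M = rng.normal(size=(3, 3))
        F = M @ M.T / 10 + np.eye(3)           # stand-in for the network response
        y_ref = np.array([1.0, 0.5, -0.5])

        u = np.zeros(3)                        # controllable device setpoints
        for _ in range(200):
            y = F @ u                          # "grid" solves power flow implicitly
            g = F.T @ (y - y_ref)              # gradient of 0.5*||y - y_ref||^2
            u = np.clip(u - 0.1 * g, -1, 1)    # projected gradient step (box constraints)
        print("tracking error:", np.linalg.norm(F @ u - y_ref))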
    In the second example, the online algorithm integrates primary frequency regulation, secondary frequency regulation, and congestion management at a fast timescale. The algorithm is distributed in that it requires communication only between neighboring buses. Collectively, the controllable devices and the swing dynamics of the network implement a primal-dual algorithm to rebalance power, restore nominal frequency and inter-area flows, and enforce line limits at a minimum control cost. We prove sufficient conditions under which the algorithm converges to a global optimum.
    (Joint work with Changhong Zhao, Enrique Mallada, Lingwen Gan)

April 14, 2016
Michelle Effros
Cooperation for Communication
    In this talk, I will tell a little bit of the unfolding story of cooperation. Using simple models to capture the cost and benefit of cooperation between communicating devices, we seek to understand whether, where, and by how much the benefit of cooperation can exceed its cost.

May 12, 2016
Babak Hassibi
Simple Algorithms and Guarantees for Matrix Completion over GF(2)
    Let X be an n_1 by n_2 matrix with entries in GF(2) and rank r < min(n_1, n_2). We consider the problem of reconstructing X given only a subset of its entries. This problem has recently found numerous applications, most notably in network and index coding, where finding optimal linear codes (over some finite field GF(q), say) can be reduced to finding the minimum rank completion of a matrix with a subset of revealed entries. The problem of matrix completion over the reals also has many applications, and in recent years several polynomial-time algorithms with provable recovery guarantees have been developed. However, to date, such algorithms do not exist in the finite-field case. We propose a linear algebraic algorithm that attempts to complete X from a random subset of its entries by inferring low-weight relations among the rows and columns of X. We establish conditions under which the algorithm runs in polynomial time and can successfully complete X with high probability from a vanishing fraction of its entries. We then propose a linear programming-based extension of our basic algorithm, and evaluate it empirically. Our simulations suggest that this linear programming-based method can successfully complete n-by-n random rank r matrices (for n < 100 and r < 10) from at most 3 log_2 N_{n,r} entries, where N_{n,r} is the number of n-by-n rank r matrices with elements in GF(2).
    This is joint work with James Saunderson and Maryam Fazel.
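    For flavor, the basic linear algebra over GF(2) replaces subtraction with XOR; a compact rank routine of the kind such algorithms build on (the completion method itself, which infers low-weight row/column relations, is more involved):

        import numpy as np

        def rank_gf2(X):
            """Rank of a 0/1 matrix over GF(2) via XOR Gaussian elimination."""
            A = X.copy() % 2
            rank = 0
            for col in range(A.shape[1]):
                pivots = np.nonzero(A[rank:, col])[0]
                if len(pivots) == 0:
                    continue
                A[[rank, rank + pivots[0]]] = A[[rank + pivots[0], rank]]
                mask = A[:, col].astype(bool)
                mask[rank] = False
                A[mask] ^= A[rank]             # XOR the pivot row into the others
                rank += 1
                if rank == A.shape[0]:
                    break
            return rank

        # A rank-2 example: the XOR of two outer products.
        X = np.outer([1, 0, 1, 1], [1, 1, 0]) ^ np.outer([0, 1, 1, 0], [1, 0, 1])
        print("GF(2) rank:", rank_gf2(X))  # 2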

2014-15
CMI Faculty Lunches

June 4, 2015
Katrina Ligett
Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard
    One of the most appealing aspects of the (coarse) correlated equilibrium concept is that natural dynamics quickly arrive at approximations of such equilibria, even in games with many players. In addition, there exist polynomial-time algorithms that compute exact (coarse) correlated equilibria. In light of this, a natural question is: how good are the (coarse) correlated equilibria that can arise from an efficient algorithm or dynamics?
    In this talk, we'll discuss this question. Our main result is strongly negative: we show that in multiplayer games that have a succinct representation, it is NP-hard to compute any coarse correlated equilibrium (or approximate coarse correlated equilibrium) with welfare strictly better than the worst possible. The focus on succinct games ensures that the underlying complexity question is interesting; many multiplayer games of interest are in fact succinct. Our results imply that, while one can efficiently compute a coarse correlated equilibrium, one cannot provide any nontrivial welfare guarantee for the resulting equilibrium, unless P=NP. We show that analogous hardness results hold for correlated equilibria, and persist under the egalitarian objective or Pareto optimality.
    To complement the hardness results, we'll briefly discuss some positive news--we develop an algorithmic framework that identifies settings in which we can efficiently compute an approximate correlated equilibrium with near-optimal welfare. We use this framework to develop an efficient algorithm for computing an approximate correlated equilibrium with near-optimal welfare in aggregative games.
    Joint work with Siddharth Barman.
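    For background, a coarse correlated equilibrium is a distribution over joint action profiles satisfying linear "no gain from a fixed deviation" constraints, so finding some CCE is a linear program; a two-player scipy sketch (a toy prisoner's-dilemma-style game of our choosing; the hardness above concerns optimizing welfare over this polytope in succinct many-player games):

        import numpy as np
        from itertools import product
        from scipy.optimize import linprog

        A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs
        payoff = [A, A.T]                         # symmetric two-player game
        profiles = list(product(range(2), range(2)))

        A_ub, b_ub = [], []                       # CCE deviation constraints
        for p in range(2):
            for dev in range(2):
                row = [(payoff[p][dev, j] if p == 0 else payoff[p][i, dev])
                       - payoff[p][i, j] for (i, j) in profiles]
                A_ub.append(row); b_ub.append(0.0)
        res = linprog(c=[0.0] * 4, A_ub=A_ub, b_ub=b_ub,
                      A_eq=[[1.0] * 4], b_eq=[1.0], bounds=[(0, 1)] * 4)
        print(dict(zip(profiles, np.round(res.x, 3))))  # all mass on (1, 1)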

Apr 9, 2015
Leonard Schulman
Red Queen Dynamics
    I'm going to talk about work in progress on how natural selection may force diversity, and what this has to do with games, multiplicative weight updates, and duality gaps.

Mar 12, 2015
Richard Murray
Analysis, Design and Prototyping of Biomolecular Feedback Systems
    Advances over the past decade have given biological engineers new insights into the role of genetic circuits in nature and the design of biomolecular circuits to implement biological operations in vitro and in vivo. In this talk I will discuss the use of concepts from systems and control engineering as applied to the analysis and design of biological feedback circuits. After a brief survey of relevant concepts from synthetic biology, I will present some recent results that combine modeling, analysis, design and experimental implementation of biological feedback circuits. These results include the role of redundant biological pathways for implementing robust decision-making strategies in cells, the use of biomolecular "breadboards" for prototyping and debugging engineered biomolecular circuits, and the implementation of circuits for regulation of gene expression and biomolecular event detection. Using these results as examples, I will discuss some of the open problems and research challenges in the area of feedback control using biological circuits.

Feb 12, 2015
Venkat Chandrasekaran
Relative Entropy Relaxations for Signomial Optimization
    Due to its favorable analytical properties, the relative entropy function plays a prominent role in a variety of contexts in information theory and in statistics. In this talk, I'll discuss some of the beneficial computational properties of this function by describing a class of relative-entropy-based convex relaxations for obtaining bounds on signomial programs (SPs), which arise commonly in many problem domains. SPs are non-convex in general, and families of NP-hard problems can be reduced to SPs. The central idea underlying our approach is a connection between the relative entropy function and efficient proofs of nonnegativity via the arithmetic-geometric-mean inequality.
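    As a toy instance of the arithmetic-geometric-mean connection (our example, not one from the talk): for the signomial f(x) = e^{2x} + e^{-2x} - 2, the weighted AM/GM inequality with weights (1/2, 1/2) gives

        (1/2) e^{2x} + (1/2) e^{-2x} >= (e^{2x})^{1/2} (e^{-2x})^{1/2} = 1,

    so e^{2x} + e^{-2x} >= 2 and hence f(x) >= 0 for all x. Optimizing over such AM/GM weights is a convex problem involving the relative entropy function, which is what makes these nonnegativity certificates efficiently searchable.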

Jan 8, 2015
Adam Wierman
Data Centers, Energy, and Online Optimization

Dec 11, 2014
Victoria Kostina
Fixed- and variable-length data compression at finite blocklength
    In data compression, a block of source symbols is compressed into a shorter string, while ensuring that the original block can be recovered from that encoded string, either exactly or approximately. Identifying the optimum rate-fidelity tradeoffs achievable in principle, regardless of the complexity of the code, is the subject of information-theoretic interest in data compression.
    Classical information theory characterizes rate-fidelity tradeoffs achievable in the limit of infinite length of the source block to be compressed. A perennial question in information theory is how relevant the asymptotic fundamental limits are when the communication system is forced to operate at a given fixed blocklength. The finite blocklength (delay) constraint is inherent to all communication scenarios. In fact, in many systems of current interest, such as real-time multimedia communication, delays are strictly constrained, while in packetized data communication, packets are frequently on the order of 1000 bits. Despite an obvious practical interest in the nonasymptotic fundamental limits, up until recently, such limits were widely believed to defy analysis.
    We show that in both fixed- and variable-length data compression, the nonasymptotic fundamental limit can be closely approximated by a function of just two parameters of the source, namely, the rate-distortion function (Shannon's asymptotic limit), and the rate-dispersion function, a new parameter of the source that characterizes the nonasymptotic behavior of the minimal achievable rate.
    In block-to-block data compression, a source block of k symbols is compressed down to n symbols, and the tradeoff between k, n and the achievable fidelity of reproduction is of interest. For memoryless sources, we show that there is a penalty over the rate-distortion function induced by the finite blocklength requirement, and the size of that penalty is closely governed by the rate-dispersion function. In other words, in block data compression, the rate-dispersion function measures the speed of approach to the rate-distortion function for a given source.
    In fixed-to-variable-length data compression, a block of k source outcomes is represented by a variable-length binary string. The nonasymptotic fundamental limit of interest is the minimum average length of the encoded string compatible with a given fidelity. Perhaps surprisingly, the behaviors of the fundamental limits in the fixed- and variable-length settings are starkly different. First of all, in the variable-length setting, the strong converse does not hold: allowing a nonvanishing, however small, excess-distortion probability, one can asymptotically beat the rate-distortion function. Moreover, this asymptotic limit is approached from below, i.e., the shorter the blocklength, the lower the achievable average rate! Still, as in fixed-length data compression, only two parameters of the source are required to give an accurate description of that counter-intuitive behavior: its rate-distortion function and its rate-dispersion function.
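    Concretely, in the fixed-length setting the two-parameter approximation referred to above takes the Gaussian form

        R(k, d, epsilon) ≈ R(d) + sqrt(V(d) / k) Q^{-1}(epsilon),

    where k is the blocklength, d the allowed distortion, epsilon the permitted excess-distortion probability, R(d) the rate-distortion function, V(d) the rate-dispersion function, and Q^{-1} the inverse of the Gaussian complementary cdf.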

Nov 12, 2014
Babak Hassibi
Comparison Lemmas: From Slepian to Non-Smooth Optimization
    In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from (possibly) noisy measurements in a variety of applications in statistics, signal processing, machine learning, etc. I will describe a fairly general theory for how to determine the performance (minimum number of measurements, mean-square error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, etc.). The genesis of the theory can be traced back to an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes).
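    For reference, Slepian's lemma states, roughly: if (X_t) and (Y_t) are centered Gaussian processes with E[X_t^2] = E[Y_t^2] for all t and E[X_s X_t] <= E[Y_s Y_t] for all s and t, then P(sup_t X_t >= u) >= P(sup_t Y_t >= u) for every u -- lowering correlations can only increase the supremum. Extensions of this comparison to min-max processes (due to Gordon) underlie performance analyses of the kind described above.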

Oct 9, 2014
Yisong Yue
Balancing the Explore/Exploit Tradeoff in Interactive Structured Prediction


2012-13 CMI Faculty Lunches

May 9, 2013
Chris Shannon

April 11, 2013

March 14, 2013

February 14, 2013

January 10, 2013

December 13, 2012
Leonard Schulman

November 8, 2012
Chris Umans
Fast Matrix Multiplication Using Coherent Configurations

October 11, 2012
Tracey Ho
Distributed Storage and Online Streaming as a Resource Allocation Problem


2011-12 CMI Faculty Lunches

May 10, 2012
Adam Wierman

April 12, 2012
Federico Echenique

March 8, 2012
Mani Chandy

February 2, 2012
Venkat Chandrasekaran

January 12, 2012
Richard Murray


2010-11 CMI Faculty Lunches

January 13, 2011
Joel Tropp


2008-09 CMI Faculty Lunches

June 4, 2009
Jerry Marsden

May 21, 2009
Richard Murray

April 23, 2009
Joel Tropp

February 19, 2009
Erik Winfree

January 15, 2009
Adam Wierman

December 11, 2008
Emmanuel Candes

November 13, 2008
Gary Lorden

October 16, 2008
Yaser Abu-Mostafa


2007-08 CMI Faculty Lunches

May 15, 2008
Babak Hassibi

April 24, 2008
Mani Chandy

April 3, 2008
Tracey Ho

March 20, 2008
Mani Chandy

February 28, 2008
Mathieu Desbrun

January 17, 2008
Michelle Effros

November 29, 2007
Chris Umans

October 25, 2007
Peter Schröder


2006-07 CMI Faculty Lunches

April 19, 2007
John Doyle

March 1, 2007
Federico Echenique

February 15, 2007
Robert McEliece

January 18, 2007
Richard Murray

December 14, 2006
Federico Echenique

November 16, 2006
Leonard Schulman


2005-06 CMI Faculty Lunches

May 25, 2006
Erik Winfree

April 20, 2006
Jerry Marsden

March 16, 2006
Babak Hassibi

February 16, 2006
Emmanuel Candes

January 26, 2006
Mani Chandy

December 8, 2005
Chris Umans

November 21, 2005
Tracey Ho (Joint with Lee Center)

October 20, 2005
Peter Schröder


2004-05 CMI Faculty Lunches

June 2, 2005
P. P. Vaidyanathan

May 12, 2005
Gary Lorden

April 21, 2005
Pietro Perona

March 10, 2005
Yaser Abu-Mostafa

February 10, 2005
Mathieu Desbrun

December 9, 2004
Richard Murray

November 18, 2004
Federico Echenique

October 25, 2004
Babak Hassibi (joint with Lee Center)

October 14, 2004
Emmanuel Candes


2003-04 CMI Faculty Lunches

June 10, 2004
Bob McEliece

May 27, 2004
Leonard Schulman

May 20, 2004
Matt Jackson

April 22, 2004
Chris Umans

April 8, 2004
Jerry Marsden

March 11, 2004
Peter Schröder

February 26, 2004
Erik Winfree
(Parsons-Gates 112)

February 12, 2004
Michelle Effros
(Avery Library)