Upcoming Events
Mathematics of Information seminar (CS286)
4:00-5:00 pm in Annenberg 314 (except where otherwise noted)
October 24, 2017
TBA
October 31, 2017
TBA
November 7, 2017
TBA
November 14, 2017
TBA
November 21, 2017
TBA
November 28, 2017
TBA
December 5, 2017
TBA
December 12, 2017
TBA

2017-18 CMI Faculty Lunches
by invitation
October 13, 2016
Venkat Chandrasekaran
Learning Semidefinite Regularizers via Matrix Factorization
Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing due to their effectiveness in addressing difficulties arising from ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available. Previous work under the title of 'dictionary learning' or 'sparse coding' may be viewed as learning a regularization function that can be computed via linear programming. We describe generalizations of these methods to learn regularizers that can be computed and optimized via semidefinite programming. Our approach for learning such semidefinite regularizers is based on computing structured factorizations of data matrices, and our algorithmic approach for computing these factorizations combines recent techniques for rank minimization problems along with operator analogs of Sinkhorn scaling. The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems. (Joint work with Yong Sheng Soh.)
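As background for the 'dictionary learning' / 'sparse coding' line of work the abstract mentions, the prototypical l1-regularized inverse problem can be sketched in a few lines. This is a generic illustration on synthetic data (all sizes and parameters are made up for the example), not the speaker's method:

```python
import numpy as np

# Illustrative only: solve the regularized inverse problem
#     min_x  0.5 * ||A x - b||^2 + lam * ||x||_1
# with ISTA (proximal gradient). The l1 penalty is the regularizer that
# induces sparse structure in the solution, as in sparse coding.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [2.0, -1.5, 1.0]           # sparse ground truth
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L, L = ||A||_2^2
x = np.zeros(100)
for _ in range(3000):
    g = A.T @ (A @ x - b)                        # gradient of the fit term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.flatnonzero(np.abs(x) > 0.5))           # indices of the large entries
```

The soft-thresholding step is the proximal operator of the l1 norm; swapping in other penalties (e.g., the nuclear norm, whose proximal step thresholds singular values) changes which structure the solution inherits.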
November 10, 2016
John Doyle
Universal laws and architectures: brains, bugs, guts, hearts, nets, grids, and zombies
We'll do some live demos to motivate new math explaining the extremes of human sensorimotor control, the laws and physiology that constrain it, and the architectures needed to achieve what remains possible. Time permitting, we'll see how the same laws and architectures are universal in highly evolvable systems from bacterial cells to the Internet, and what tragic side effects so far seem unavoidable. There are many videos online, with links from http://www.cds.caltech.edu/~doyle
December 8, 2016
Thomas Vidick
Overlapping qubits
I will discuss recent work on the problem of testing quantum devices. We focus on the basic problem of testing dimension.
An ideal system of n qubits has 2^n dimensions, and this exponential scaling grants quantum computers their power. In a real system, however, the qubits will not be perfect. They can "overlap," in the sense that an operation on one qubit might slightly affect the others. We show that, allowing for slight overlaps, n qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be at most epsilon in n^(O(1/epsilon^2)) dimensions.) Our results show that real systems with only apparently small imperfections could have much less power than one would have expected.
On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise. It is important to devise more robust tests on the arrangements of qubits in quantum devices.
Joint work with Rui Chao, Chris Sutherland, and Ben Reichardt.
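A classical point of comparison for the packing claim (this is the standard near-orthogonality of random vectors, not the qubit construction from the talk): random unit vectors in d dimensions have pairwise overlaps of order 1/sqrt(d), which is the phenomenon underlying such dimension-vs-overlap trade-offs. The sizes below are arbitrary:

```python
import numpy as np

# Illustrative only: random unit vectors in modest dimension have small
# pairwise inner products ("overlaps"), a classical analog of many
# slightly-overlapping qubits fitting into far fewer dimensions than ideal.
rng = np.random.default_rng(1)
n, d = 200, 2000
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)    # n unit vectors in R^d

G = V @ V.T                                      # Gram matrix of overlaps
np.fill_diagonal(G, 0.0)                         # ignore self-overlaps
print(round(float(np.max(np.abs(G))), 3))        # typically about 0.1, i.e. ~1/sqrt(d) scale
```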
February 9, 2017
Joel Tropp
Sketchy decisions: Low-rank matrix optimization with optimal storage
Convex matrix optimization problems with low-rank solutions play a fundamental role in signal processing, statistics, and related disciplines. These problems are difficult to solve because of the cost of maintaining the matrix decision variable, even though the low-rank solution has few degrees of freedom. This talk presents the first algorithm that provably solves these problems using optimal storage. The algorithm produces high-quality solutions to large problem instances that, previously, were intractable.
Joint with Volkan Cevher, Roarke Horstmeyer, Quoc Tran-Dinh, Madeleine Udell, and Alp Yurtsever.
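One way to see how a sketch can stand in for the full matrix variable is the generic randomized range-finder idea from the sketching literature (an illustration with made-up sizes, not the algorithm of the talk, which maintains sketches throughout the optimization):

```python
import numpy as np

# Illustrative only: recover a low-rank matrix from a thin random sketch.
# A sketch Y = A @ Omega needs O(n*k) storage rather than O(n^2) for A.
rng = np.random.default_rng(2)
n, r = 500, 5
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-5 matrix

k = r + 5                                  # sketch width, with oversampling
Omega = rng.standard_normal((n, k))
Y = A @ Omega                              # thin sketch capturing range(A)
Q, _ = np.linalg.qr(Y)                     # orthonormal basis for that range
A_hat = Q @ (Q.T @ A)                      # rank-k reconstruction

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(err < 1e-8)                          # exact up to floating-point error
```

Because A here is exactly rank 5 and the sketch width exceeds the rank, the reconstruction is exact up to round-off; for noisy or approximately low-rank matrices, oversampling controls the error.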
March 9, 2017
Babak Hassibi
Comparison Lemmas, Convex Optimization and Phase Retrieval
April 13, 2017
Frederick Eberhardt
Causal Macro Variables
Standard methods of causal discovery take as input a statistical data set of measurements of well-defined causal variables. The goal is then to determine the causal relations among these variables. But how are these causal variables identified or constructed in the first place? Often we have sensor level data but assume that the relevant causal interactions occur at a higher scale of aggregation. Sometimes we only have aggregate measurements of causal interactions at a finer scale. I will motivate the general problem of causal discovery and present recent work on a framework and method for the construction and identification of causal macro variables that ensures that the resulting causal variables have well-defined intervention distributions. Time permitting, I will show an application of this approach to large-scale climate data, for which we were able to identify the macro-phenomenon of El Niño using an unsupervised method on micro-level measurements of the sea surface temperature and wind speeds over the equatorial Pacific.
