
Mathematics of Information
Seminar (CS286)

4:00-5:00 pm in Annenberg 314
(except where otherwise noted)

October 1, 2019
Jingcheng Liu
Approximate counting, phase transitions & geometry of polynomials
    In classical statistical physics, a phase transition is understood by studying the geometry (the zero-set) of an associated polynomial (the partition function). In this talk I will show that one can exploit this notion of phase transitions algorithmically, and conversely exploit the analysis of algorithms to understand phase transitions.
    As applications, I will give efficient deterministic approximation algorithms (FPTAS) for counting q-colorings, and for computing the partition function of the Ising model.
    This talk is fully self-contained, and based on joint work with Alistair Sinclair and Piyush Srivastava.
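    For orientation, one common parametrization of the partition function in question (my choice of notation, not necessarily the talk's): for the Ising model on a graph G = (V, E) with edge interaction \beta and vertex activity \lambda,

        Z_G(\beta, \lambda) \;=\; \sum_{\sigma : V \to \{\pm 1\}} \beta^{\,|\{\{u,v\} \in E \,:\, \sigma_u = \sigma_v\}|} \; \lambda^{\,|\{v \in V \,:\, \sigma_v = +1\}|},

    which, for fixed \beta, is a polynomial in \lambda. A zero-free region for Z_G in the complex \lambda-plane (a Lee-Yang-type statement) is the kind of "geometry of polynomials" input that, via truncated Taylor expansion of \log Z_G, can yield a deterministic FPTAS.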

October 8, 2019
Jingcheng Liu
Uniform Sampling Through the Lovász Local Lemma
    We propose a new algorithmic framework, called partial rejection sampling, to draw samples exactly from a product distribution conditioned on avoiding a set of bad events. Our framework builds new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms such as the cycle-popping algorithm for rooted spanning trees. Among other applications, we discover new algorithms to sample (weighted) independent sets, and satisfying assignments of k-CNF formulas with bounded variable occurrences.
    Based on joint work with Heng Guo and Mark Jerrum.
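    To make the framework concrete, here is a minimal sketch (my own, not code from the paper) of the resampling loop for satisfying assignments of a CNF formula: resample exactly the variables appearing in currently violated clauses, and stop when nothing is violated. Exactness of the output distribution is what the paper establishes in its extremal regime; this toy version only illustrates the mechanics.

    import random

    def partial_rejection_sampling(n_vars, clauses, rng=random.Random(0)):
        # clauses: list of clauses, each a list of signed literals;
        # e.g. [1, -3, 4] stands for (x1 OR NOT x3 OR x4), variables 1-indexed
        assignment = [None] + [rng.random() < 0.5 for _ in range(n_vars)]

        def violated(clause):
            # a clause is violated iff every one of its literals evaluates to False
            return all(assignment[abs(lit)] != (lit > 0) for lit in clause)

        while True:
            bad = [c for c in clauses if violated(c)]
            if not bad:
                return assignment[1:]
            # resample exactly the variables that occur in violated clauses
            for v in {abs(lit) for c in bad for lit in c}:
                assignment[v] = rng.random() < 0.5

    # example: a sample from the satisfying assignments of (x1 OR x2) AND (NOT x2 OR x3)
    print(partial_rejection_sampling(3, [[1, 2], [-2, 3]]))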

October 15, 2019
Grigory Yaroslavtsev
Advances in Linear Sketching over Finite Fields
    I will describe recent advances in linear sketching over finite fields and its relationship to the log-rank conjecture and (quantum) communication complexity. For a function f: Z_p^n -> R, a linear sketch modulo p is a distribution A over k x n matrices with entries in Z_p which, for every x \in Z_p^n, allows one to compute f(x) given access only to Ax. An important example is binary sketching (the case p = 2). I will describe recent structural results about the smallest sketch dimension k that enables linear sketching, using tools from Fourier analysis and communication complexity. In particular, if f is an XOR-function, linear sketching over Z_2 turns out to be the optimal tool for the design of two-player one-way communication protocols (under the uniform distribution of x), as well as multi-player one-way broadcasting protocols with Omega(n) players (for any distribution of x).
    Based on joint works with Kannan, Mossel and Sanyal (CCC'18), Hosseini and Lovett (CCC'19), and Zhou (RANDOM'19).
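    A toy illustration of the definition in the binary case p = 2 (my own example, not from the papers): the sketch of x is Ax over Z_2, and an F_2-linear function of x, such as the parity x_1 xor ... xor x_n, can be read off from the sketch whenever the corresponding row lies in A.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 16, 4

    A = rng.integers(0, 2, size=(k, n))   # a sketching matrix over Z_2
    A[0, :] = 1                           # plant the all-ones row for the demo below
    x = rng.integers(0, 2, size=n)
    sketch = (A @ x) % 2                  # the k-bit sketch Ax (mod 2)

    # the parity f(x) = x_1 xor ... xor x_n is an XOR function whose value
    # is read off from the sketch coordinate corresponding to the all-ones row
    assert sketch[0] == x.sum() % 2
    print(sketch)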

October 22, 2019
Guannan Qu
Distributed optimization in multi-agent systems: bridging the gap between centralized and distributed algorithms
    We consider the distributed optimization problem over a network of agents, where the objective is to optimize a global function formed by a sum of local functions, with each agent only allowed to use local information and local communication with neighbors. This is a problem of fundamental interest in multi-agent systems and has found various applications in sensor networks, multi-robot systems, distributed machine learning, etc. Various distributed gradient methods have been proposed to solve this problem, but in this talk we show that there is a gap in convergence rate between existing distributed gradient algorithms and the optimal centralized gradient algorithm. We then develop a gradient tracking technique that yields two algorithms that partially bridge this gap. In particular, one of our algorithms achieves Nesterov-type acceleration in the realm of distributed optimization for the first time. Based on joint work with Na Li.
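    As a much simplified illustration of the gradient tracking idea, here is a toy implementation on scalar quadratics over a ring network; the mixing matrix, step size, and local objectives are my own choices, not the setting of the talk. Each agent mixes its iterate with its neighbors' and steps along a running estimate s_i of the network-average gradient.

    import numpy as np

    # toy setup: m agents on a ring, agent i holds f_i(x) = 0.5 * (x - b_i)^2,
    # so the global minimizer of (1/m) * sum_i f_i is the average of the b_i
    m, eta, T = 5, 0.1, 200
    b = np.arange(m, dtype=float)
    W = np.zeros((m, m))                 # doubly stochastic mixing matrix for the ring
    for i in range(m):
        W[i, i] = 0.5
        W[i, (i - 1) % m] = 0.25
        W[i, (i + 1) % m] = 0.25

    grad = lambda x: x - b               # stacked local gradients (agent i uses only its own entry)
    x = np.zeros(m)                      # local iterates, one per agent
    s = grad(x)                          # gradient trackers, initialized to the local gradients
    for _ in range(T):
        x_new = W @ x - eta * s          # mix with neighbors, then step along the tracked gradient
        s = W @ s + grad(x_new) - grad(x)    # update the running estimate of the average gradient
        x = x_new

    print(x)                             # every entry approaches b.mean() = 2.0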

October 29, 2019
Guannan Qu
Exponential stability of primal-dual gradient dynamics and its applications in network control
    Primal-dual gradient dynamics (PDGD), which seek a saddle point of the Lagrangian of an optimization problem, have been widely used in systems and control. In these applications, it is important for such dynamics to have strong stability guarantees, such as global exponential stability. However, while the global asymptotic stability of PDGD has been well studied, much less is known about when PDGD is globally exponentially stable.
    In this talk, we study the primal-dual gradient dynamics for constrained convex optimization and provide conditions that guarantee its global exponential stability. We also present its applications in a smart grid control problem.
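    For reference, one standard form of these dynamics (written here for an equality-constrained problem; the talk's setting may be more general): for the Lagrangian L(x, \lambda) = f(x) + \lambda^\top (Ax - b),

        \dot{x} \;=\; -\nabla_x L(x, \lambda) \;=\; -\nabla f(x) - A^\top \lambda,
        \qquad
        \dot{\lambda} \;=\; +\nabla_\lambda L(x, \lambda) \;=\; Ax - b,

    i.e., gradient descent in the primal variable and gradient ascent in the dual variable. Global exponential stability means every trajectory converges to the saddle point at an exponential rate e^{-ct} for some fixed c > 0, regardless of the initial condition.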

November 5, 2019
☆ NOTE DIFFERENT TIME: 3:00-4:00PM
Dean Doron
Nearly Optimal Pseudorandomness From Hardness
    Existing proofs that deduce P=BPP from circuit lower bounds convert randomized algorithms to deterministic ones with a large polynomial slowdown in running time. In this talk, I will show that if we assume exponential lower bounds against nondeterministic circuits, we can convert any randomized algorithm running in time T to a deterministic one running in time T^{2+α} for an arbitrarily small constant α. Under complexity-theoretic assumptions, such a slowdown is nearly optimal.
    Our result follows from a new pseudorandom generator whose construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes.
    Joint work with Dana Moshkovitz, Justin Oh, and David Zuckerman.
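    A rough sanity check on the exponent (my arithmetic, not a claim from the abstract): a pseudorandom generator with seed length (1 + \alpha) \log T that fools the relevant circuits lets one derandomize by running the time-T algorithm on every seed and taking a majority vote, for a total cost of

        2^{(1+\alpha)\log T} \cdot T \;=\; T^{1+\alpha} \cdot T \;=\; T^{2+\alpha}.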

November 12, 2019
Eliza O'Reilly
Poisson hyperplane tessellations in high dimensions and one-bit compression
    In this talk, we will discuss random tessellations of R^n induced by stationary Poisson hyperplanes as the dimension n tends to infinity. We consider metrics of the cells of the tessellation, motivated by the data compression method used in one-bit compressed sensing. That is, assuming data in R^n is compressed using one bit with respect to each hyperplane, we determine how the intensity of the hyperplanes must scale with n to ensure either sufficient separation of distinct data points or sufficient proximity of data points that are compressed together. This study leads to new insights into the nature of the cells of these tessellations in high dimensions and a connection with random polytopes defined as the convex hull of independent and identically distributed random points. Based on joint work with François Baccelli.
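    A toy finite-dimensional illustration of the compression map (Gaussian hyperplanes here, not the stationary Poisson process of the talk): each point is encoded by one bit per hyperplane, namely which side of the hyperplane it falls on, so two points share a code exactly when they lie in the same cell of the tessellation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 400
    A = rng.standard_normal((m, n))           # hyperplane normals
    b = rng.standard_normal(m)                # hyperplane offsets

    def one_bit_code(x):
        # one bit per hyperplane: which side of the hyperplane x falls on
        return (A @ x - b) >= 0

    x = rng.standard_normal(n)
    y = x + 0.01 * rng.standard_normal(n)     # a nearby point
    z = rng.standard_normal(n)                # an unrelated point
    print(np.mean(one_bit_code(x) != one_bit_code(y)))   # small fraction of differing bits
    print(np.mean(one_bit_code(x) != one_bit_code(z)))   # large fraction of differing bits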

November 19, 2019
Eliza O'Reilly
Modeling Repulsion with Determinantal Point Processes
    Determinantal point processes (DPPs) are a useful probabilistic model of point configurations exhibiting repulsion between points. They appear in random matrix theory and have found many applications, including selecting diverse subsets in machine learning. We will discuss the appealing properties of DPPs as well as a procedure for their simulation. I will then present a coupling result characterizing the repulsive effect of a point in a DPP, based on joint work with Jesper Møller.
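    A brute-force illustration of the defining property for a finite DPP given as an L-ensemble, P(S) \propto det(L_S) (this enumerates all subsets, so it is only a definitional toy, not the simulation procedure discussed in the talk):

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    N = 6
    B = rng.standard_normal((N, N))
    L = B @ B.T                                   # a positive semidefinite kernel on {0, ..., N-1}

    # enumerate all subsets S and weight them by det(L_S); the normalizer is det(L + I)
    subsets = [s for r in range(N + 1) for s in itertools.combinations(range(N), r)]
    weights = np.array([np.linalg.det(L[np.ix_(s, s)]) if s else 1.0 for s in subsets])
    probs = weights / weights.sum()

    sample = subsets[rng.choice(len(subsets), p=probs)]
    print(sample)                                 # a typically diverse subset of {0, ..., N-1}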


2019-20
CMI Faculty Lunches

by invitation


April 11, 2019
Joel Tropp
SketchySVD
    This talk asserts that randomized linear algebra is a natural tool for on-the-fly compression of data matrices that arise from large-scale scientific simulations and data collection.
    The technical contribution consists in a new algorithm for constructing an accurate low-rank approximation of a huge matrix from streaming data. Among other applications, we show how the SVD of a large-scale sea surface temperature dataset exposes features of the global climate.
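    For a flavor of the approach, here is a minimal two-sketch, one-pass low-rank approximation in the spirit of (but simpler than) SketchySVD; the sketch sizes and test matrix are arbitrary illustrative choices. The point is that only the small sketches Y and W need to be maintained as the data streams by.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 300, 200, 10
    A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # a rank-r test matrix

    k, ell = 2 * r, 4 * r                  # sketch sizes (with oversampling)
    Omega = rng.standard_normal((n, k))
    Psi = rng.standard_normal((ell, m))
    Y = A @ Omega                          # range sketch: updatable as the data streams in
    W = Psi @ A                            # co-range sketch: updatable as the data streams in

    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for the captured range
    X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]
    A_hat = Q @ X                          # low-rank approximation assembled from the sketches
    print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))   # near zero for an exactly rank-r A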

November 14, 2019
John Doyle
Universal laws and architectures in complex networked systems with applications to sensorimotor control
    This talk will describe new theory that addresses research challenges central to the understanding (and design) of complex network architectures in neuroscience, as well as cell biology, technology, medicine, and social systems. (We'll also hopefully highlight a rationale for the ubiquity of Effros' "outformation" in complex networks.)

April 09, 2020
Richard Murray
Can we really use machine learning in safety critical systems?
    Machine learning has the promise of revolutionizing our approach to many problems in which computers have typically not performed well (such as image classification, speech recognition, and related problems). It is being extended and applied to systems in which these problems are part of a much larger set of complex decision-making tasks, often in safety- and mission-critical systems. In this talk I will discuss the methods used in engineering applications to provide high confidence in aerospace and other systems, and the challenges of adopting those techniques to include "machine learned" components. Along the way, I will talk about the scary-field of self-driving cars, where some of these methods are being tried out in the real world (ready or not...), and some of the techniques we are developing in my group, which will hope will eventually help engineers design safer systems (with or without ML).