

2015-16 CMI Seminars

October 27, 2015
Piyush Srivastava
Stability of Causal Inference
    Consider a system consisting of a binary "stimulus" variable X (e.g., whether an individual consumes tobacco), a binary "response" variable Y (e.g., whether the individual develops a particular disease), and a "hidden" confounding variable U (e.g., a hypothetical genetic factor that affects the individual's propensity both to consume tobacco and to develop the disease). The system can be modeled by a directed graphical model with X, Y and U as nodes, and U → X, U → Y and X → Y as edges. Given P, the observed probability distribution of the pair (X, Y), we ask: what is the causal effect of X on Y? In the early 1990s, Pearl proposed that an appropriate formalization of this question is to compute the conditional probability of Y given X in a different graphical model in which the incoming edges to X have been deleted, given only the observed distribution P on the original model. (In the above example, this new model has only the edges X → Y and U → Y.) This deletion simulates an experimental intervention on X while leaving the other interactions in the model unchanged. In the general case, the graphical model can comprise several other nodes, and X and Y may themselves be disjoint subsets of these nodes rather than singleton vertices.
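    As a concrete (hypothetical) illustration of the edge-deletion operation, the following Python sketch computes P(Y | do(X = x)) for the three-node model above by marginalizing over the confounder. It assumes full knowledge of the model, including the hidden variable U and made-up conditional probabilities; the identification problem studied in the talk is harder, since only the observed distribution of (X, Y) is given.

    # Hypothetical numbers for the binary model U -> X, U -> Y, X -> Y.
    p_u = {0: 0.7, 1: 0.3}                                                # P(U)
    p_x_given_u = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}              # P(X | U), indexed [u][x]
    p_y_given_xu = {(0, 0): {0: 0.95, 1: 0.05}, (0, 1): {0: 0.6, 1: 0.4},
                    (1, 0): {0: 0.7, 1: 0.3}, (1, 1): {0: 0.1, 1: 0.9}}   # P(Y | X, U), indexed [(x, u)][y]

    def p_y_do_x(y, x):
        """P(Y=y | do(X=x)): delete the edge U -> X and marginalize over U."""
        return sum(p_u[u] * p_y_given_xu[(x, u)][y] for u in p_u)

    def p_y_given_x(y, x):
        """Ordinary conditional P(Y=y | X=x), for comparison."""
        num = sum(p_u[u] * p_x_given_u[u][x] * p_y_given_xu[(x, u)][y] for u in p_u)
        den = sum(p_u[u] * p_x_given_u[u][x] for u in p_u)
        return num / den

    print("P(Y=1 | do(X=1)) =", p_y_do_x(1, 1))
    print("P(Y=1 | X=1)     =", p_y_given_x(1, 1))   # differs, because U confounds X and Y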
    It is not hard to see that the above problem is not solvable for all graphical models and all pairs X and Y of subsets of nodes. However, a long line of work by several researchers culminated in 2006 in a complete algorithmic characterization of the graphical models in which the problem is solvable. This characterization also includes an algorithmic procedure that takes the observed distribution and outputs the requisite conditional distribution in the cases where the problem is solvable [Huang and Valtorta, 2006; Shpitser and Pearl, 2006].
    These characterizations assumed an exact knowledge of both the underlying graphical model and the observed distribution P. However, in practice, we may expect imprecision both in our description of the model (which may arise from failing to include an edge that should be present), and in our knowledge of the observed distribution P (which may arise due to round-off errors in storage and due to statistical noise). In our work, we study the stability of the above causal inference procedures with respect to such imprecisions in the input.
    On the positive side, we show that the effect of removing an edge on the accuracy of these procedures is not large, as long as the removed edges are themselves "weak"; in particular, the error introduced in the output is proportional to the "strength" of the edge for a natural measure of strength. We also show that examples solvable by some sound but "incomplete" algorithms that preceded the complete algorithm described above are well-conditioned with respect to errors in the observed distribution P.
    On the negative side, we construct examples solvable by the complete algorithm described above that are remarkably ill-conditioned. More formally, we give a sequence of instances such that, for any algorithm that solves the causal inference problem on an instance with n nodes, the relative error in the output is larger than the relative error in the input by a factor of at least exp(Ω(n^{1/3})).
    Joint work with Leonard J. Schulman.

November 13, 2015
NOTE: 12:00 noon in Annenberg 314
Gil Cohen
Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs
(part 1 of 2)
    In his 1947 paper that inaugurated the probabilistic method, Erdős proved the existence of 2 log(n)-Ramsey graphs on n vertices. Matching Erdős' result with a constructive proof is a central problem in combinatorics that has received significant attention in the literature. In this talk we will present recent work towards this goal (http://eccc.hpi-web.de/report/2015/095/).
    No prior knowledge will be assumed.

November 24, 2015
Gil Cohen
Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs
(part 2 of 2)

December 1, 2015
Madeleine Udell
Generalized Low Rank Models
(part 1 of 2)
    Generalized Low Rank Models (GLRM) extend the idea of Principal Components Analysis (PCA) to embed arbitrary data tables consisting of numerical, Boolean, categorical, ordinal, and other data types into a low dimensional space. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously.
    In these two talks, we introduce the GLRM framework with an eye towards open problems, spending time at the interface between practical large scale data analysis and statistical and optimization theory. We'll discuss:

  • what can you model as a GLRM?
  • what can they be used for?
  • when can we trust them?
  • how can we fit them?
  • what kinds of information or control might be needed to do better?

    The first talk in the series focuses on understanding the modeling flexibility and large scale data analysis applications of low rank models, and on connecting theoretical statistical results with the practical setting of data analysis.
    The second talk delves further into optimization theory, and into recent results using these ideas in a "bandit" setting in which data table entries can be actively queried.
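    As a point of reference for the optimization discussion, the sketch below fits the simplest special case of a generalized low rank model (a quadratic-loss, rank-k factorization with missing entries) by alternating least squares in numpy. GLRMs replace the quadratic loss with losses suited to Boolean, categorical, or ordinal columns and add regularizers, but the alternating update structure is the same. The dimensions and data are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 50, 30, 3
    A_true = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # rank-k ground truth
    mask = rng.random((m, n)) < 0.7                              # observed entries
    A = np.where(mask, A_true, 0.0)

    X = rng.normal(size=(m, k))
    Y = rng.normal(size=(k, n))
    lam = 1e-3                                                   # small ridge term for stability

    for _ in range(50):
        # update each row of X given Y, using only that row's observed columns
        for i in range(m):
            obs = mask[i]
            G = Y[:, obs] @ Y[:, obs].T + lam * np.eye(k)
            X[i] = np.linalg.solve(G, Y[:, obs] @ A[i, obs])
        # update each column of Y given X, using only that column's observed rows
        for j in range(n):
            obs = mask[:, j]
            G = X[obs].T @ X[obs] + lam * np.eye(k)
            Y[:, j] = np.linalg.solve(G, X[obs].T @ A[obs, j])

    err = np.linalg.norm((X @ Y - A_true)[~mask]) / np.linalg.norm(A_true[~mask])
    print("relative error on held-out entries:", err)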

December 8, 2015
Madeleine Udell
Generalized Low Rank Models
(part 2 of 2)
NOTE: location change to Annenberg 314

March 29, 2016
Hu Fu
Towards simple, robust and approximately optimal auctions
(part 1 of 2)
    In these two talks I will describe recent developments in the design and analysis of simple and robust auctions that provably extract near-optimal revenue. In the first talk I will cover the basics of optimal auction design and give several proofs of Myerson's seminal result from 1981, offering different perspectives on this classic result. Some of the proofs lead to recent generalizations. In the second talk I will discuss some recent work on prior-independent and simple-form auctions.
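    For concreteness, here is a minimal sketch of the textbook special case of Myerson's result, under the assumption of two bidders with i.i.d. uniform [0, 1] values (a regular distribution). The virtual value is phi(v) = v - (1 - F(v))/f(v) = 2v - 1, so the optimal auction reduces to a second-price auction with reserve price 1/2; the simulation estimates its expected revenue, which is 5/12 in theory. This illustrates only the classic result, not the generalizations discussed in the talks.

    import random

    def second_price_with_reserve(values, reserve=0.5):
        """Return the winner's payment (0.0 if no bid clears the reserve)."""
        eligible = sorted(v for v in values if v >= reserve)
        if not eligible:
            return 0.0
        # the winner pays the larger of the reserve and the second-highest eligible bid
        return max(reserve, eligible[-2]) if len(eligible) > 1 else reserve

    random.seed(0)
    trials = 200_000
    revenue = sum(second_price_with_reserve([random.random(), random.random()])
                  for _ in range(trials))
    print("estimated expected revenue:", revenue / trials, "(theory: 5/12 ~ 0.4167)")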

April 5, 2016
Hu Fu
Towards simple, robust and approximately optimal auctions
(part 2 of 2)

April 12, 2016
Benny Chor
Conservation Profiles, Functional Enrichment, and the Tree of Life
    Analysis of proteins and genes based solely on their sequences has been one of the tools employed successfully since the early days of bioinformatics. Conservation of a gene or protein across different species is usually defined as the corresponding sequences being highly similar. Such sequence conservation usually implies a similar function.
    In this work, we explore conservation, based on a very weak notion: the existence of a minimal similarity among sequences. We study this question in the context of 18 species, including mammals and insects. We show that even this minimal requirement yields interesting observations relating conservation, function, and the evolutionary history as represented by the tree of life.
    The main tools we use are enrichment analysis, based on the hypergeometric distribution, and clustering.
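    As a small illustration of the first tool, the sketch below runs a hypergeometric enrichment test with scipy; the gene counts are made up for illustration and are not taken from the study.

    from scipy.stats import hypergeom

    # Question: among k genes drawn from a background of N genes, of which K carry a
    # given functional annotation, is observing x annotated genes surprisingly many?
    N = 20000   # background: all genes considered
    K = 300     # genes carrying the annotation of interest
    k = 150     # genes in the conserved cluster being tested
    x = 12      # annotated genes found in the cluster

    # P(X >= x) under the hypergeometric null (sampling without replacement)
    p_value = hypergeom.sf(x - 1, N, K, k)
    print("enrichment p-value:", p_value)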
    The talk will be self contained.
    Joint work with Jonathan Witztum, Erez Persi, David Horn, and Metsada Pasmanik-Chor.

April 26, 2016
Anand Kumar Narayanan
Computing Isomorphisms between Finite Fields using Elliptic Curves
(part 1 of 2)
    Given a prime power p^n, a finite field of cardinality p^n may be constructed as the quotient of the ring of polynomials over the prime order field Z/pZ modulo an irreducible polynomial of degree n. This construction, however, is non-canonical, since all known (unconditional) algorithms for finding irreducible polynomials are randomized. Irrespective of the choice of irreducible polynomial, one obtains the "same" field, since two finite fields of the same cardinality are isomorphic. This poses the following algorithmic problem in several applications: given two irreducible polynomials of degree n over Z/pZ, compute an isomorphism between the respective finite fields obtained modulo them.
    We begin by surveying the fastest known (randomized) algorithm for the problem, which has run time quadratic in n. We then present a new (randomized) algorithm based on elliptic curve isogenies with run time subquadratic in n for most n. The crux of our approach is finding pre-images under the Lang map on elliptic curves over finite fields. We conclude by posing an open computational problem concerning Lang maps whose resolution would solve the isomorphism problem with run time linear in n. The discussion will be self-contained, and familiarity with elliptic curves, while helpful, is not required.
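    To make the problem statement concrete, the following sketch constructs GF(2^3) in two ways, as Z/2Z[x] modulo f = x^3 + x + 1 and modulo g = x^3 + x^2 + 1, and finds an isomorphism by brute-force search for a root of f in the second field (sending x mod f to that root extends uniquely to a field isomorphism). This naive search only illustrates the problem; it is neither the quadratic-time algorithm surveyed in the talk nor the new elliptic-curve algorithm.

    from itertools import product

    p, n = 2, 3
    f = [1, 1, 0, 1]   # x^3 + x + 1, coefficients listed from the constant term up
    g = [1, 0, 1, 1]   # x^3 + x^2 + 1

    def mul_mod(a, b, m):
        """Multiply field elements a, b (length-n coefficient lists) modulo the monic m over GF(p)."""
        prod = [0] * (2 * n - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % p
        for d in range(len(prod) - 1, n - 1, -1):    # reduce degrees >= deg(m) = n
            c = prod[d]
            if c:
                for i in range(len(m)):
                    prod[d - n + i] = (prod[d - n + i] - c * m[i]) % p
        return prod[:n]

    def eval_in_field(poly, e, m):
        """Evaluate poly (coefficients over GF(p)) at the field element e of GF(p)[x]/(m)."""
        result, power = [0] * n, [1] + [0] * (n - 1)
        for c in poly:
            result = [(r + c * q) % p for r, q in zip(result, power)]
            power = mul_mod(power, e, m)
        return result

    # Brute force: try every element of GF(p)[x]/(g) as the image of x (mod f).
    for coeffs in product(range(p), repeat=n):
        root = list(coeffs)
        if eval_in_field(f, root, g) == [0] * n:
            print("x (mod f) maps to", root, "(coefficients of 1, x, x^2 modulo g)")
            break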

May 3, 2016
Anand Kumar Narayanan
Computing Isomorphisms between Finite Fields using Elliptic Curves
(part 2 of 2)

2014-15 CMI Seminars

April 7 & 14, 2015
Enrique Mallada
Nonlinear Linear Systems Analysis Applied to Networked Dynamical Systems
    This two-part seminar will give an overview of several traditional and novel analysis tools for stability and control of nonlinear dynamical systems. Topics to be covered include Lyapunov Theory, Passivity and Monotonicity, as well as their differential versions. Emphasis will be placed on the limitations of each technique and how these limitations affect its range of application.
    Throughout the talks I will describe the importance of these tools in the study of networked dynamical systems. In particular, I will use a canonical model for synchronization of networked oscillators as a motivating example, and describe how to apply these tools in their study. Along the way, I will highlight a few commonly incurred mistakes and the possible workarounds.
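    For readers who want something concrete to experiment with, the sketch below simulates a standard networked-oscillator model of Kuramoto type on a small ring network (an illustrative assumption; the canonical model used in the talk may differ in details) and reports the phase-coherence order parameter.

    import numpy as np

    # theta_i' = omega_i + (K / deg_i) * sum_j A_ij * sin(theta_j - theta_i), forward Euler.
    rng = np.random.default_rng(1)
    n = 10
    A = np.zeros((n, n))
    for i in range(n):                         # ring topology
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

    omega = rng.normal(0.0, 0.1, n)            # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)       # initial phases
    K, dt = 2.0, 0.01

    def order_parameter(th):
        """|r| = 1 means perfect phase synchrony, |r| ~ 0 means incoherence."""
        return abs(np.exp(1j * th).mean())

    for step in range(5000):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * coupling / A.sum(axis=1))

    print("order parameter after simulation:", order_parameter(theta))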

Feb 24, 2015
Madeleine Udell
Generalized Low Rank Models
    Principal components analysis (PCA) is a well-known technique for approximating a data set represented by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix completion, sparse and robust PCA, k-means, k-SVD, and maximum margin matrix factorization. The method handles heterogeneous data sets, and leads to coherent schemes for compressing, denoising, and imputing missing entries across all data types simultaneously. It also admits a number of interesting interpretations of the low rank factors, which allow clustering of examples or of features.
    We propose several parallel algorithms for fitting generalized low rank models, and describe implementations and numerical results.

Feb 10 & 17, 2015
Krishnamurthy Dvijotham (Dj)
Tools from Optimization Theory and Applications to Nonlinear Dynamical Systems
    The lectures will give an overview of various tools from optimization and control theory and applications of these ideas to studying properties of nonlinear dynamical systems. A list of topics to be covered includes:
    1) Quadratically Constrained Quadratic Programs (QCQPs), a class of optimization problems that are known to be hard in general. We will cover various special cases under which QCQPs are known to be efficiently solvable, and describe the relationship of these results to various "hidden convexity" results (a small sketch of the standard SDP relaxation of a QCQP appears after this list).
    2) Modern tools from nonlinear control theory: Integral Quadratic Constraints (IQCs), Contraction and Differential Positivity.
    3) Applications of these ideas to analyzing properties of nonlinear dynamical systems, including some applications to studying properties of numerical algorithms.
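    The sketch below, written with the cvxpy package (an assumption about available tooling), sets up the standard Shor/semidefinite relaxation of a small QCQP with a single quadratic constraint; in that single-constraint case the S-lemma implies the relaxation is tight. The problem data are arbitrary illustrative choices.

    import cvxpy as cp
    import numpy as np

    n = 3
    rng = np.random.default_rng(0)
    P0, q0 = np.eye(n), rng.normal(size=n)          # objective: x'P0 x + 2 q0'x
    c = np.array([1.0, 0.0, 0.0])                   # constraint: ||x - c||^2 <= 1,
    P1, q1, r1 = np.eye(n), -c, c @ c - 1.0         # i.e. x'P1 x + 2 q1'x + r1 <= 0

    # Lift to the symmetric matrix M = [[X, x], [x', 1]] with X playing the role of
    # x x'; relaxing the rank-one condition to M >= 0 gives the semidefinite program.
    M = cp.Variable((n + 1, n + 1), symmetric=True)
    X, x = M[:n, :n], M[:n, n]
    constraints = [M >> 0,
                   M[n, n] == 1,
                   cp.trace(P1 @ X) + 2 * q1 @ x + r1 <= 0]
    prob = cp.Problem(cp.Minimize(cp.trace(P0 @ X) + 2 * q0 @ x), constraints)
    prob.solve()
    print("relaxed optimal value:", prob.value)
    print("relaxed minimizer x:", x.value)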
References:
1) Pólik, Imre, and Tamás Terlaky. "A survey of the S-lemma." SIAM review 49.3 (2007): 371-418.
2) Lessard, Laurent, Benjamin Recht, and Andrew Packard. "Analysis and design of optimization algorithms via integral quadratic constraints." arXiv preprint arXiv:1408.3595 (2014).
3) Forni, Fulvio, and Rodolphe Sepulchre. "Differentially positive systems." arXiv preprint arXiv:1405.6298 (2014).

Jan 20 & 27, 2015
Piyush Srivastava
Zeros of polynomials and their applications
    A multivariate polynomial over the complex numbers is said to be stable with respect to a region H if it does not vanish when its arguments lie in H. An important special case is when H is the product of unit disks in the complex plane (a "polydisk"), in which case stability with respect to H is equivalent to the statement that the polynomial is non-zero when all its arguments lie in the unit disk. Another important case is when H is the product of upper half planes, in which case stability w.r.t. H is equivalent to the statement that the polynomial is non-zero when all its arguments have positive imaginary parts.
    While the term "stability" itself originated in control theory, stable polynomials and their properties have had several applications in fields as diverse as linear algebra, combinatorics and probability theory, and computational complexity theory, among others. In these two talks, I will attempt to discuss the following three applications:
    1) #P-hardness of computing the magnetization of the Ising model.
    This requires an extension to the Lee-Yang theorem -- a stability theorem w.r.t the unit disk for the partition function of the Ising model that was itself proven in order to study phase transitions.
    2) Exact characterization of probabilities for which the Lovász local lemma is valid.
    This uses a statement about the stability of the independent set polynomial w.r.t a polydisk. We will discuss a related question regarding the effective verification of stability properties and its possible implications for a different proof of the Lee-Yang theorem.
    3) Negative association for probability distributions on spanning trees.
    Here, one uses the stability of the generating function of the given probability distribution w.r.t. the upper half plane (also known as the "strongly Rayleigh" property in this context). This connection was used in a breakthrough result of Oveis-Gharan, Saberi and Singh [O-GSS11] on approximate solutions to the Traveling Salesperson Problem.

(1) is based on joint work with Alistair Sinclair [SS14]. (2) is based on a paper of Scott and Sokal [SS05] and some ongoing joint work with Mario Szegedy. (3) is based on papers by Borcea, Brändén and Liggett [BBL09] and Oveis-Gharan, Saberi and Singh [O-GSS11].
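    A small numerical check of the Lee-Yang property mentioned in (1): for a ferromagnetic Ising model, the partition function, viewed as a polynomial in the vertex activity, has all of its zeros on the unit circle. The graph and edge interaction below are arbitrary illustrative choices.

    import itertools
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # a triangle with a pendant vertex
    n_vertices = 4
    b = 0.4                                     # edge interaction, 0 < b < 1 (ferromagnetic)

    # coeffs[k] = sum over spin configurations with exactly k "+" spins of
    # b^(number of disagreeing edges); Z(lambda) = sum_k coeffs[k] * lambda^k
    coeffs = [0.0] * (n_vertices + 1)
    for spins in itertools.product([-1, +1], repeat=n_vertices):
        k = sum(1 for s in spins if s == +1)
        disagree = sum(1 for (i, j) in edges if spins[i] != spins[j])
        coeffs[k] += b ** disagree

    zeros = np.roots(coeffs[::-1])               # np.roots expects the highest degree first
    print("moduli of the zeros:", np.abs(zeros))  # all should be (numerically) equal to 1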

References:
[BBL09] Borcea, J., Brändén, P. & Liggett, T. Negative dependence and the geometry of polynomials. J. Amer. Math. Soc. 22, 521-567 (2009).
[O-GSS11] Oveis-Gharan, S., Saberi, A. & Singh, M. A Randomized Rounding Approach to the Traveling Salesman Problem. In Proc. IEEE Annual Symposium on Foundations of Computer Science (FOCS) 550-559 (2011).
[SS05] Scott, A. D. & Sokal, A. D. The Repulsive Lattice Gas, the Independent-Set Polynomial, and the Lovász Local Lemma. J Stat Phys 118, 1151-1261 (2005).
[SS14] Sinclair, A. & Srivastava, P. Lee-Yang theorems and the complexity of computing averages. Comm Math Phys 329 (3), 827-858 (2014). Extended abstract in Proc. ACM Annual Symposium on Theory of Computing (STOC) 2013, 625-634.

Jan 6, 2015
Ruta Mehta
Leontief Exchange Markets Can Solve Multivariate Polynomial Equations, Yielding FIXP and ETR Hardness
    We show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production, thereby settling open questions of Vazirani and Yannakakis (2009). In both cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be "yes" instances. If all instances are under consideration, then in both cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals.
    The main technical part of our result is the following reduction:
    Given a set 'F' of simultaneous multivariate polynomial equations in which the variables are constrained to be in a closed bounded region in the positive orthant, we construct a Leontief market 'M' which has one good corresponding to each variable in 'F'. We prove that the equilibria of 'M', when projected onto prices of these latter goods, are in one-to-one correspondence with the set of solutions of the polynomials. (Joint work with J. Garg, V. V. Vazirani, S. Yazdanbod.)

2013-14 CMI Seminars

Nov 25 & Dec 2, 2014
Siddharth Barman
An Approximate Version of Caratheodory's Theorem and Its Algorithmic Applications
    In this talk I will present an approximate version of Caratheodory's theorem and its algorithmic applications. In particular, I will show that for every vector in the convex hull of a set of vectors X there exists a nearby (under p-norm distance, for p at least 2) vector that can be expressed as a convex combination of at most b vectors of X, where the bound b is independent of the dimension of the vectors.
    Given the significance of Caratheodory's theorem, this approximate version is interesting in its own right. It also has notable algorithmic applications. Specifically, I will describe how this approximate version of Caratheodory's theorem leads to an algorithm for computing near optimal solutions of sparse bilinear and quadratic forms over the simplex. This algorithm, in particular, provides a polynomial-time approximation scheme for Nash equilibrium in games where the sum of the payoff matrices is sparse. Moreover, for arbitrary two-player games the running time of the algorithm matches the best-known upper bound, which is obtained in (Lipton, Markakis, and Mehta 2003).
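    A minimal numerical sketch of the sampling idea behind approximate Caratheodory-type statements (the talk's argument may differ in its details): express a point of the convex hull with convex weights, draw b points i.i.d. from those weights, and average them; the l2 error decays roughly like 1/sqrt(b), independent of the ambient dimension. The dimensions below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m = 1000, 200                      # ambient dimension, number of points in X
    X = rng.normal(size=(m, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit l2 sphere

    w = rng.random(m)
    w /= w.sum()                          # convex combination weights
    mu = w @ X                            # the target point in conv(X)

    for b in [5, 20, 80, 320]:
        idx = rng.choice(m, size=b, p=w)  # i.i.d. sample from the weights
        approx = X[idx].mean(axis=0)      # convex combination of at most b points of X
        print(f"b = {b:4d}   l2 error = {np.linalg.norm(approx - mu):.4f}")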

Nov 4 & 11, 2014
Georgios Piliouras
Beyond Equilibria
    The notion of equilibrium is a central concept that lies at the intersection of game theory, control theory, dynamical systems and computer science. It provides a palpable and easy-to-understand handle for arguing about decentralized systems. This simplicity, however, comes at a cost: in practice this framework is often too restrictive and cannot address key real-life phenomena. These include the effects of transient system phenomena, the plausibility of persistent far-from-equilibrium behavior, as well as the quantitative analysis of systems with multiple equilibria.
    We will discuss a number of different techniques to address such challenges and pinpoint some key mathematical tools for:
    a) online learning/optimization based approaches (e.g., regret minimization) and their connections to convex relaxations of the notion of (Nash) equilibrium;
    b) the techniques and limitations of robust price of anarchy (including recent couplings of these ideas with stochastic/robust optimization);
    c) average case analysis techniques for systems with many attracting equilibria (from identifying system invariants to approximating the volumes of regions of attraction);
    d) novel frameworks for analyzing disequilibrium systems (e.g., persistent patterns).
    No background will be assumed in the respective disciplines and the talk will be self-contained.
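    As a small illustration of the no-regret dynamics mentioned in (a), the sketch below runs multiplicative-weights (Hedge) updates for both players of matching pennies; the time-averaged play approaches the mixed Nash equilibrium (1/2, 1/2), an instance of the general connection between no-regret learning and equilibrium relaxations. The game, step size and horizon are arbitrary illustrative choices, not taken from the talk.

    import numpy as np

    # Matching pennies: the row player wants to match, the column player to mismatch.
    A = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])           # row player's payoffs; the column player gets -A
    eta, T = 0.05, 2000
    w_row, w_col = np.ones(2), np.ones(2)
    avg_row, avg_col = np.zeros(2), np.zeros(2)

    for t in range(T):
        p = w_row / w_row.sum()           # current mixed strategies
        q = w_col / w_col.sum()
        avg_row += p
        avg_col += q
        # expected payoff of each pure action against the opponent's current mixture
        u_row = A @ q
        u_col = -(A.T @ p)
        w_row *= np.exp(eta * u_row)      # Hedge / multiplicative-weights update
        w_col *= np.exp(eta * u_col)
        w_row /= w_row.sum()              # renormalize to avoid numerical drift
        w_col /= w_col.sum()

    print("time-averaged row strategy:", avg_row / T)   # approaches [0.5, 0.5]
    print("time-averaged col strategy:", avg_col / T)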

Oct 28, 2014
Bernhard von Stengel
Department of Mathematics
London School of Economics
Nash Codes for Noisy Channels
    When a sender transmits a message as a "codeword" over a noisy channel, the receiver's optimal "decoding" of the possibly garbled channel output is to identify the message that has most likely been sent. We extend this standard approach by asking if the sender's action is also optimal. That is, given the receiver's decoding strategy, is the sender most likely understood correctly by transmitting the original codeword, rather than any other channel input? A "Nash code" has this desirable stability property, where encoding and decoding define a Nash equilibrium in this sender-receiver game with fully aligned interests of the players. We study this concept with examples, and show two sufficient conditions for Nash codes: receiver-optimal codes, and, somewhat surprisingly, _arbitrary_ codes for the classic binary channel (if the receiver breaks ties "monotonically," which holds for generic priors or payoffs). Joint work with Penelope Hernandez (Valencia).
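    A minimal sketch of the sender's side of this stability check, for an illustrative repetition code on a binary symmetric channel (the code, channel and parameters are made-up examples, not taken from the paper): fix maximum-likelihood decoding for the receiver, then verify that, for every message, transmitting its assigned codeword maximizes the probability of being decoded as that message over all possible channel inputs.

    from itertools import product

    eps = 0.1                                   # crossover probability of the binary symmetric channel
    code = {0: (0, 0, 0), 1: (1, 1, 1)}         # repetition code of length 3
    inputs = list(product((0, 1), repeat=3))

    def channel_prob(sent, received):
        """Probability that the channel turns `sent` into `received`."""
        flips = sum(s != r for s, r in zip(sent, received))
        return (eps ** flips) * ((1 - eps) ** (3 - flips))

    def ml_decode(received):
        """Decode to the message whose codeword most likely produced `received`."""
        return max(code, key=lambda msg: channel_prob(code[msg], received))

    def prob_decoded_as(sent, msg):
        """Probability that transmitting `sent` gets decoded as `msg`."""
        return sum(channel_prob(sent, r) for r in inputs if ml_decode(r) == msg)

    for msg, codeword in code.items():
        best_input = max(inputs, key=lambda x: prob_decoded_as(x, msg))
        ok = prob_decoded_as(codeword, msg) >= prob_decoded_as(best_input, msg)
        print(f"message {msg}: sender's codeword is a best response: {ok}")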

Oct 7 & 14, 2014
Quentin Berthet
Statistical and Computational Complexities
    In statistics and theoretical computer science, the word complexity is often used to measure the difficulty of a problem and to describe its intrinsic nature.
    I will give a short introduction to the notion of complexity in these two domains, and show some of the challenges that arise when we try to consider them simultaneously. I will present a setting that makes it possible to describe the tradeoffs between statistical and computational efficiency, and illustrate it through various high-dimensional problems on random vectors, graphs, and satisfiability formulas.

Sept 23, 2014
Umang Bhaskar
Minimizing Latency in Network Congestion Games using Equilibrium Outcomes

2012-13 CMI Seminars

December 11, 2012
Sachin Adlakha

December 4, 2012
Yakov Babichenko

November 27, 2012
Yakov Babichenko

November 13, 2012

October 30, 2012
Rita Ackerman - Clustering Axioms and Properties: Towards Theoretical Foundations

October 23, 2012
Umang Bhaskar - Online Mixed Packing and Covering

October 16, 2012
Umang Bhaskar - Uniqueness of Equilibria in Atomic Splittable Routing Games

October 9, 2012
Siddharth Barman - Applications of Linear Programming in Combinatorial Optimization

October 2, 2012
Siddharth Barman - Applications of Linear Programming in Coding Theory

