2018-2019
2019/05/01: Chris Peikert, "Noninteractive Zero Knowledge for NP from Learning With Errors"
Chris Peikert (University of Michigan)
Abstract:
We finally close the long-standing problem of constructing a noninteractive zero-knowledge (NIZK) proof system for any NP language with security based on the Learning With Errors (LWE) problem, and thereby on worst-case lattice problems. Our proof system instantiates a framework developed in a series of recent works for soundly applying the Fiat–Shamir transform using a hash function family that is correlation intractable for a suitable class of relations. Previously, such hash families were based either on “exotic” assumptions (e.g., indistinguishability obfuscation or optimal hardness of ad-hoc LWE variants) or, more recently, on the existence of circularly secure fully homomorphic encryption. However, none of these assumptions are known to be implied by LWE or worst-case hardness.
Our main technical contribution is a hash family that is correlation intractable for arbitrary size-S circuits, for any polynomially bounded S, based on LWE (with small polynomial approximation factors). Our construction can be instantiated in two possible “modes,” yielding a NIZK that is either computationally sound and statistically zero knowledge in the common random string model, or vice-versa in the common reference string model.
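As a concrete illustration of the transform at the heart of this line of work, here is a minimal Python sketch of Fiat–Shamir applied to a toy Schnorr protocol, with the verifier's random challenge replaced by a hash of the transcript. The talk's contribution is precisely a hash family under which this replacement is provably sound for all of NP, which SHA-256 below does not provide; the small group parameters are illustrative assumptions, not from the paper.

```python
import hashlib
import secrets

# Toy Schnorr parameters: p = 2q + 1, g generates the order-q subgroup.
# (Illustration-sized numbers; real deployments use much larger groups.)
p, q, g = 2039, 1019, 4

def H(*parts):
    """The Fiat-Shamir step: derive the challenge by hashing the transcript."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Noninteractive proof of knowledge of x such that y = g^x mod p."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)        # prover's randomness
    a = pow(g, r, p)                # first message (commitment)
    c = H(g, y, a)                  # challenge = hash of transcript
    z = (r + c * x) % q             # response
    return y, (a, z)

def verify(y, proof):
    a, z = proof
    c = H(g, y, a)                  # re-derive the challenge
    return pow(g, z, p) == (a * pow(y, c, p)) % p

x = secrets.randbelow(q)
y, pi = prove(x)
assert verify(y, pi)
```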
(This is joint work with Sina Shiehian. Paper: https://eprint.iacr.org/2019/158)
2019/04/17: Thatchaphol Saranurak, "Breaking Quadratic Time for Small Vertex Connectivity"
Thatchaphol Saranurak (TTIC)
Abstract:
Vertex connectivity is a classic, extensively studied problem. Given an integer k, the goal is to decide whether an n-node m-edge graph can be disconnected by removing k vertices. Although an O(m)-time algorithm has been conjectured since 1974 [Aho, Hopcroft, and Ullman], and despite its sibling problem of edge connectivity being resolved over two decades ago [Karger STOC’96], so far no vertex connectivity algorithm is faster than O(n^2) time, even for k = 4 and m = O(n). In the simplest case, where m = O(n) and k = O(1), the O(n^2) bound dates back five decades to [Kleitman IEEE Trans. Circuit Theory’69]. For the general case, the best bound is \tilde{O}(\min\{kn^2, n^\omega + nk^\omega\}) [Henzinger, Rao, Gabow FOCS’96; Linial, Lovász, Wigderson FOCS’86].
In this talk, I will present a randomized Monte Carlo algorithm running in \tilde{O}(m + k^3 n) time. This algorithm proves the conjecture of Aho, Hopcroft, and Ullman for k = O(1) up to a polylog factor, breaks the 50-year-old bound of Kleitman, and is the fastest known for 4 < k < n^{0.456}. The story is similar for directed graphs, where we obtain an algorithm running in \tilde{O}(k^2 m) time.
The key to our results is to avoid computing single-source connectivity, which was needed by all previous exact algorithms and is not known to admit o(n^2) time. Instead, we design the first local algorithm for computing vertex connectivity; without reading the whole graph, our algorithm can find a separator of size at most k or certify that there is no separator of size at most k “near” a given seed node.
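For contrast, here is a sketch of the classical max-flow route that the local approach avoids: split every vertex into an in/out pair joined by a unit-capacity edge, so vertex cuts become edge cuts. This is illustrative only (a hypothetical helper on top of networkx, making many flow calls, which is exactly the quadratic-type cost the new algorithm escapes), not the talk's algorithm.

```python
import itertools
import networkx as nx

def vertex_connectivity(G):
    """Classical reduction: v becomes v_in -> v_out with capacity 1, edges get
    infinite capacity, and the connectivity is the minimum s-t max flow over
    non-adjacent pairs.  Many flow computations => far from O(m + k^3 n)."""
    D = nx.DiGraph()
    for v in G.nodes:
        D.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in G.edges:
        D.add_edge((u, "out"), (v, "in"), capacity=float("inf"))
        D.add_edge((v, "out"), (u, "in"), capacity=float("inf"))
    best = len(G) - 1
    for s, t in itertools.combinations(G.nodes, 2):
        if not G.has_edge(s, t):
            flow, _ = nx.maximum_flow(D, (s, "out"), (t, "in"))
            best = min(best, flow)
    return best

print(vertex_connectivity(nx.petersen_graph()))  # Petersen is 3-connected -> 3
```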
This talk is based on joint works with Danupon Nanongkai and Sorrachai Yingchareonthawornchai.
2019/04/03: Richard Peng, "Fully Dynamic Spectral Vertex Sparsifiers and Applications"
Richard Peng (Georgia Tech)
Abstract:
Problems arising from analyzing and understanding graph structures have motivated the development of many powerful tools for storing and compressing graphs and networks. Many, if not most, of these massive graphs are accumulations of updates over time. Maintaining updates and queries on dynamically changing graphs more efficiently than recomputing from scratch is a well-studied topic, with several important open problems concerning global, optimization-related queries.
We study dynamic graph algorithms for maintaining spectral vertex sparsifiers with respect to a subset of terminal vertices. Such sparsifiers preserve key structures in spectral algorithms, including effective resistances (which can be viewed as a numerical generalization of connectivity), solutions to systems of Laplacian linear equations, and energy of electrical flows between terminal vertices. We give data structures that maintain, in sublinear time, these sparsifiers under insertions/deletions of edges, as well as terminal additions.
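As a static reference point for what is being maintained, here is a short numpy sketch of effective resistance computed from the Laplacian pseudoinverse, i.e., the from-scratch computation the dynamic data structures beat:

```python
import numpy as np

def effective_resistance(edges, n, u, v):
    """R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), via the pseudoinverse of the
    graph Laplacian.  A static ~O(n^3) computation; the talk's data structures
    maintain such quantities in sublinear time per update."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1
    chi = np.zeros(n)
    chi[u], chi[v] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi

# On a path 0-1-2, resistances add in series: R(0, 2) = 2.
print(effective_resistance([(0, 1), (1, 2)], 3, 0, 2))  # ~2.0
```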
This primitive in turn leads to sublinear time data structures for key primitives in spectral graph algorithms, including ones at the core of the “Laplacian paradigm” for designing graph optimization algorithms. In particular, we obtain O(m^{4/3}\epsilon^{-4}) time per query/update for effective resistances on unweighted graphs, and O(n^{11/12}\epsilon^{-5}) time per query/update for implicit access to linear system solutions, where \epsilon is the relative approximation accuracy.
The majority of the talk will focus on key components of this data structure: (1) an interpretation of vertex sparsifiers as a sum of random walks, (2) a suitable choice of terminals to keep these walks local, and (3) maintenance of local walks and numerical solutions. Potential avenues in generalizing these techniques to provide new building blocks for dynamic graph algorithms will also be discussed.
2019/03/20: Aleksandar Nikolov, "Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems"
Aleksandar Nikolov (University of Toronto)
Abstract:
Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. While this approach yields tight approximation guarantees for some problems, like Max Cut, for many others, like Max Sat, Max DiCut, and constraint satisfaction problems with global constraints, the tight approximation ratio is still unknown. One of the main reasons for this is that very few techniques for rounding semidefinite relaxations are known.
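For readers who have not seen it, here is a minimal numpy sketch of the Goemans–Williamson hyperplane rounding being generalized; the stand-in vectors below would, in an actual run, come from solving the Max Cut SDP:

```python
import numpy as np

def hyperplane_round(V, seed=0):
    """Goemans-Williamson rounding: V[i] is the SDP unit vector of vertex i;
    assign each vertex the side of a uniformly random hyperplane it lands on."""
    g = np.random.default_rng(seed).standard_normal(V.shape[1])
    return np.sign(V @ g)

# Stand-in unit vectors; in practice these come from the SDP relaxation.
V = np.random.default_rng(1).standard_normal((6, 6))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(hyperplane_round(V))
# An edge (i, j) is cut with probability arccos(<v_i, v_j>)/pi, which yields
# the classical 0.878-approximation for Max Cut.
```

The Brownian variant discussed in the talk replaces this one-shot Gaussian projection with a diffusion whose coordinates stick once they hit ±1.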
In this work, we present a new general and simple method for rounding semi-definite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max Cut, Max 2-Sat, and Max DiCut, and derive new algorithms that are competitive with the best known results. We further show that our algorithms can be used, together with the Sum of Squares hierarchy, to approximate constraint satisfaction problems subject to multiple global cardinality constraints.
Joint work with Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Roy Schwartz, and Mohit Singh
2019/03/06: Shayan Oveis Gharan, "Strongly log concave polynomials, high dimensional simplicial complexes, and an FPRAS for counting Bases of Matroids"
Shayan Oveis Gharan (University of Washington)
Abstract:
A matroid is an abstract combinatorial object which generalizes the notions of spanning trees and linearly independent sets of vectors. I will talk about an efficient algorithm, based on the Markov Chain Monte Carlo technique, to approximately count the number of bases of any given matroid.
The proof is based on new connections between high dimensional simplicial complexes and a new class of multivariate polynomials called completely log-concave polynomials. In particular, we exploit a fundamental fact from our previous work: the basis generating polynomial of any matroid is a log-concave function over the positive orthant.
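As a concrete instance of the chain being analyzed, here is a sketch of the basis-exchange (down-up) random walk for the graphic matroid, i.e., spanning trees, whose rapid mixing this line of work establishes for all matroids; the networkx-based helper is illustrative, not the paper's code:

```python
import random
import networkx as nx

def basis_exchange_step(G, T, rng=random):
    """One down-up step on spanning trees: drop a uniformly random tree edge,
    then reconnect the two components with a uniformly random crossing edge
    (possibly the same edge).  The stationary distribution is uniform."""
    e = rng.choice(sorted(T))
    T.remove(e)
    H = nx.Graph(list(T))
    H.add_nodes_from(G)
    comp = nx.node_connected_component(H, e[0])
    crossing = [(u, v) for u, v in G.edges if (u in comp) != (v in comp)]
    T.add(tuple(sorted(rng.choice(crossing))))
    return T

G = nx.complete_graph(5)
T = {tuple(sorted(e)) for e in nx.minimum_spanning_tree(G).edges}
for _ in range(1000):
    T = basis_exchange_step(G, T)
print(T)  # after mixing, T is (approximately) a uniform spanning tree
```

Approximate counting then follows from approximate sampling by standard self-reducibility arguments.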
Based on joint works with Nima Anari, Kuikui Liu, and Cynthia Vinzant.
2019/02/20: Sepehr Assadi, "A Simple Sublinear-Time Algorithm for Counting Arbitrary Subgraphs via Edge Sampling"
Sepehr Assadi (Princeton University)
Abstract:
In the subgraph counting problem, we are given a (large) graph G(V, E) and a (small) graph H (e.g., a triangle), and the goal is to estimate the number of occurrences of H in G. Our focus in this talk is on designing sublinear-time algorithms for approximately computing the number of occurrences of H in G in the setting where the algorithm is given query access to G. This problem has been studied in several recent works, which primarily focused on specific families of graphs H such as triangles, cliques, and stars. However, not much is known about approximate counting of arbitrary graphs H in the literature. This is in sharp contrast to the closely related subgraph enumeration problem, in which the goal is to list all copies of the subgraph H in G. The AGM bound shows that the maximum number of occurrences of any subgraph H in a graph G with m edges is O(m^{p(H)}), where p(H) is the fractional edge cover number of H (for example, for the triangle, assigning weight 1/2 to each edge gives p(H) = 3/2, recovering the classical O(m^{3/2}) bound), and enumeration algorithms with matching runtime are known for every H.
In this talk, we bridge this gap between the subgraph counting and subgraph enumeration problems and present a simple sublinear-time algorithm that estimates the number of occurrences of any arbitrary graph H in G, denoted by \#H, to within a (1 \pm \varepsilon)-approximation factor with high probability in O(m^{p(H)}/\#H) \cdot \text{poly}(\log n, 1/\varepsilon) time. Our algorithm is allowed the standard set of queries for general graphs, namely degree queries, pair queries, and neighbor queries, plus an additional edge-sample query that returns an edge chosen uniformly at random. The performance of our algorithm matches that of Eden et al. [FOCS 2015, STOC 2018] for counting triangles and cliques and extends it to all choices of subgraph H, under the additional assumption of edge-sample queries. Our results also apply to the more general problem of database join size estimation, for which they achieve optimal bounds for every choice of H.
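Not the talk's algorithm, but a toy illustrating why a uniform edge-sample query is such a natural primitive: for H a triangle, sampling an edge and counting the triangles through it gives an unbiased estimator, since each triangle is seen by exactly three edges.

```python
import random

def estimate_triangles(adj, edges, samples=20000, rng=random):
    """Toy edge-sampling estimator for triangles (p(H) = 3/2): sample a
    uniform edge e and count triangles through it, so E[m * t_e] = 3 * #H."""
    m = len(edges)
    total = 0
    for _ in range(samples):
        u, v = rng.choice(edges)
        total += len(adj[u] & adj[v])   # triangles containing edge (u, v)
    return m * total / (3 * samples)

# K4 has exactly 4 triangles.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
adj = {i: set() for i in range(4)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
print(estimate_triangles(adj, edges))  # ~4
```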
Joint work with Michael Kapralov and Sanjeev Khanna.
2019/02/06: Ran Canetti, "Fully Bideniable Interactive Encryption"
Ran Canetti (Boston University and Tel Aviv University)
Abstract:
While standard encryption guarantees secrecy of the encrypted plaintext only against an attacker that has no knowledge of the communicating parties’ keys and encryption randomness, deniable encryption [Canetti et al., Crypto’96] provides the additional guarantee that the plaintext remains secret even in the face of entities that attempt to coerce (or bribe) the communicating parties to expose their internal states, including the plaintexts, keys, and randomness. To achieve this guarantee, deniable encryption equips the parties with faking algorithms which allow them to generate fake keys and randomness that make the ciphertext appear consistent with any plaintext of the parties’ choice. To date, however, only partial results were known: either deniability against coercing only the sender, or against coercing only the receiver [Sahai-Waters, STOC ‘14], or schemes satisfying weaker notions of deniability [O’Neill et al., Crypto ‘11].
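To fix the syntax being discussed, here is a hypothetical interface sketch of the faking algorithms; the actual scheme is a three-message interactive protocol, which this collapses to a single call per role:

```python
from typing import Protocol

class BideniableEncryption(Protocol):
    """Hypothetical syntax sketch, not the paper's construction: alongside
    encryption/decryption, each party gets a faking algorithm that produces
    internal state consistent with any claimed plaintext."""

    def enc(self, pk: bytes, m: bytes, r: bytes) -> bytes: ...

    def dec(self, sk: bytes, c: bytes) -> bytes: ...

    def send_fake(self, pk: bytes, c: bytes, r: bytes,
                  m_real: bytes, m_fake: bytes) -> bytes:
        """Return fake randomness r' making c look like enc(pk, m_fake, r')."""

    def recv_fake(self, sk: bytes, c: bytes,
                  m_real: bytes, m_fake: bytes) -> bytes:
        """Return a fake secret key sk' under which c decrypts to m_fake."""
```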
In this paper we present the first fully bideniable interactive encryption scheme, thus resolving this 20-year-old open problem. Our scheme also provides an additional and new guarantee: Even if the sender claims that one plaintext was used and the receiver claims a different one, the adversary has no way of figuring out who is lying – the sender, the receiver, or both. This property, which we call off-the-record deniability, is useful when the parties don’t have means to agree on what fake plaintext to claim, or when one party defects against the other. Our protocol has three messages, which is optimal [Bendlin et al., Asiacrypt’11], and needs a globally available reference string. We assume subexponential indistinguishability obfuscation (IO) and one-way functions.
Joint work with Sunoo Park and Oxana Poburinnaya.
2018/12/12: Julia Chuzhoy, "Almost Polynomial Hardness of Node-Disjoint Paths in Grids"
Julia Chuzhoy (TTIC)
Abstract:
In the classical Node-Disjoint Paths (NDP) problem, we are given an n-vertex graph G, and a collection of pairs of its vertices, called demand pairs. The goal is to route as many of the demand pairs as possible, where to route a pair we need to select a path connecting it, so that all selected paths are disjoint in their vertices.
The best current algorithm for NDP achieves an O(\sqrt{n})-approximation, while, until recently, the best negative result was a roughly \Omega(\sqrt{\log n})-hardness of approximation. Recently, an improved 2^{\Omega(\sqrt{\log n})}-hardness of approximation for NDP was shown, even if the underlying graph is a subgraph of a grid graph, and all source vertices lie on the boundary of the grid. Unfortunately, this result does not extend to grid graphs.
The approximability of NDP in grids has remained a tantalizing open question, with the best upper bound of \tilde{O}(n^{1/4}), and the best lower bound of APX-hardness. In this talk we come close to resolving this question, by showing an almost polynomial hardness of approximation for NDP in grid graphs.
Our hardness proof performs a reduction from the 3COL(5) problem to NDP, using a new graph partitioning problem as a proxy. Unlike the more standard approach of employing Karp reductions to prove hardness of approximation, our proof is a Cook-type reduction, where, given an input instance of 3COL(5), we produce a large number of instances of NDP, and apply an approximation algorithm for NDP to each of them. The construction of each new instance of NDP crucially depends on the solutions to the previous instances that were found by the approximation algorithm.
Joint work with David H.K. Kim and Rachit Nimavat.
2018/11/28: Eric Balkanski, "The Adaptive Complexity of Maximizing a Submodular Function"
Eric Balkanski (Harvard University)
Abstract:
We introduce the study of submodular optimization in the adaptive complexity model to quantify the information theoretic complexity of black-box optimization in a parallel computation model. Informally, the adaptivity of an algorithm is the number of sequential rounds it makes when each round can execute polynomially many function evaluations in parallel. Since submodular optimization is regularly applied on large datasets, we seek algorithms with low adaptivity to enable speedups via parallelization. For the canonical problem of maximizing a monotone submodular function under a cardinality constraint, all that was known until recently is that the adaptivity needed to obtain a constant approximation is between 1 and \Omega(n). Our main result is a tight characterization showing that the adaptivity needed is, up to lower order terms, \Theta(\log n):
We describe an algorithm which requires O(\log n) sequential rounds and achieves an approximation that is arbitrarily close to 1/3;
We show that no algorithm can achieve an approximation better than O(1/\log n) with o(\log n/\log\log n) rounds. Thus, when allowing for parallelization, our algorithm achieves a constant factor approximation exponentially faster than any previous algorithm for submodular maximization. I will conclude the talk by surveying very recent results on submodular optimization in the adaptive complexity model.
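For contrast with the O(\log n)-round algorithm, here is the classical greedy baseline, whose k picks are inherently sequential and hence cost k adaptive rounds; the coverage function is just a standard example of a monotone submodular objective:

```python
def greedy(elements, f, k):
    """Classical greedy for max f(S) s.t. |S| <= k.  Each pick depends on all
    previous picks, so this is k adaptive rounds, even though the marginal
    evaluations within a single round could run in parallel."""
    S = set()
    for _ in range(k):
        best = max(elements - S, key=lambda e: f(S | {e}) - f(S))
        S.add(best)
    return S

# Coverage: f(S) = number of items covered by the chosen sets.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d", "e"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy(set(sets), f, 2))  # {4, 2}: covers all 5 items
```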
Joint work with Yaron Singer.
2018/11/14: Urmila Mahadev, "Classical Homomorphic Encryption for Quantum Circuits"
Urmila Mahadev (UC Berkeley)
Abstract:
We present the first leveled fully homomorphic encryption scheme for quantum circuits with classical keys. The scheme allows a classical client to blindly delegate a quantum computation to a quantum server: an honest server is able to run the computation while a malicious server is unable to learn any information about the computation. We show that it is possible to construct such a scheme directly from a quantum secure classical homomorphic encryption scheme with certain properties. Finally, we show that a classical homomorphic encryption scheme with the required properties can be constructed from the learning with errors problem.
2018/10/31: Michal Koucky, "Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time"
Michal Koucky (Charles University)
Abstract:
Edit distance is a measure of similarity of two strings based on the minimum number of character insertions, deletions, and substitutions required to transform one string into the other. The edit distance can be computed exactly using a dynamic programming algorithm that runs in quadratic time. Andoni, Krauthgamer and Onak (2010) gave a nearly linear time algorithm that approximates edit distance within approximation factor \mathrm{poly}(\log n). In this talk I will present an algorithm with running time O(n^{2-2/7}) that approximates the edit distance within a constant factor.
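For reference, this is the textbook quadratic-time dynamic program that the sub-quadratic approximation algorithm competes against:

```python
def edit_distance(a: str, b: str) -> int:
    """Textbook O(n*m) dynamic program: d[i][j] is the edit distance between
    the first i characters of a and the first j characters of b."""
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                          # deletion
                          d[i][j - 1] + 1,                          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1])) # substitution
    return d[n][m]

print(edit_distance("kitten", "sitting"))  # 3
```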
Joint work with Diptarka Chakraborty, Debarati Das, Elazar Goldenberg, and Mike Saks.
2018/10/17: C. Seshadhri, "Finding forbidden minors through random walks: (almost) n^{1/2} query one-sided testers for minor closed properties"
C. Seshadhri (UC Santa Cruz)
Abstract:
Let G be an undirected, bounded degree graph with n vertices. Fix a finite graph H, and suppose one must remove \varepsilon n edges from G to make it H-minor free (for some small constant \varepsilon>0). We give a nearly n^{1/2} time algorithm that, with high probability, finds an H-minor in such a graph.
As an application, consider a graph G that requires \varepsilon n edge removals to make it planar. This result implies an algorithm, with the same running time, that produces a K_{3,3} or K_5 minor in G. No prior sublinear time bound was known for this problem. By the graph minor theorem, we get an analogous result for any minor-closed property.
Up to n^{o(1)} factors, this result resolves a conjecture of Benjamini-Schramm-Shapira (STOC 2008) on the existence of one-sided property testers for minor-closed properties. Furthermore, our algorithm is nearly optimal, by lower bounds of Czumaj et al. (RSA 2014).
Joint work with Akash Kumar and Andrew Stolman
2018/10/03: Alex Andoni, "Parallel Graph Connectivity in Log Diameter Rounds"
Alex Andoni (Columbia University)
Abstract:
The MPC model has emerged as a compelling model for modern parallel systems such as MapReduce, Hadoop, and Spark, capturing coarse-grained computation on large data well. Here, data is distributed to processors, each of which has a sublinear (in the input data) amount of memory, and we alternate between rounds of computation and rounds of communication, where each machine can communicate as much data as the size of its memory. This model is stronger than the classical PRAM model, and a recent research program has been to design algorithms whose running time is smaller than in the PRAM model.
One now-classic challenge is the graph connectivity problem. On an undirected graph with $n$ nodes and $m$ edges, $O(\log n)$-round connectivity algorithms have been known for over 35 years. However, no algorithms with better complexity bounds are known for MPC (in fact, better bounds are provably impossible for some restricted classes of algorithms).
We give faster algorithms for the connectivity problem by parameterizing the time complexity as a function of the diameter of the graph. Our main result is an $O(\log D \cdot \log\log_{m/n} n)$-round connectivity algorithm for diameter-$D$ graphs, using $O(m)$ total memory.
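To see where the diameter dependence comes from, here is a toy round-by-round simulation of min-label propagation, a naive baseline that is an assumption for illustration and not the paper's algorithm; the result above improves the $O(D)$ rounds below to $O(\log D \cdot \log\log_{m/n} n)$:

```python
import networkx as nx

def label_propagation_rounds(G):
    """Every node repeatedly adopts the minimum label in its closed
    neighborhood.  Labels stabilize after O(D) rounds on a diameter-D graph,
    with one connected component per final label."""
    label = {v: v for v in G}
    rounds = 0
    while True:
        new = {v: min([label[v]] + [label[u] for u in G[v]]) for v in G}
        if new == label:
            return label, rounds
        label, rounds = new, rounds + 1

G = nx.cycle_graph(16)                  # diameter 8
labels, r = label_propagation_rounds(G)
print(r)                                # ~diameter rounds before stabilizing
```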
Joint work with Zhao Song, Cliff Stein, Zhengyu Wang, Peilin Zhong (to appear in FOCS’18).
2018/08/19: Avishay Tal, "Oracle Separation of BQP and the Polynomial Hierarchy"
Avishay Tal (Simons Institute)
Abstract:
We present an oracle, A, relative to which \mathsf{BQP}^{A} is not contained in \mathsf{PH}^{A}.
Following the approach of Aaronson [STOC, 2010], our result is obtained by finding a distribution D over Boolean strings of length N such that:
There exists a quantum algorithm that runs in time \mathrm{polylog}(N) and distinguishes between D and the uniform distribution over Boolean strings of length N.
No Boolean circuit of quasi-polynomial size and constant depth can distinguish between D and the uniform distribution with advantage better than \mathrm{polylog}(N)/\sqrt{N}.
Joint work with Ran Raz.
2018/08/05: Michael Kearns, "Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness"
Michael Kearns (UPenn)
Abstract:
The most prevalent notions of fairness in machine learning are statistical definitions: they fix a small collection of pre-defined groups, and then ask for parity of some statistic of the classifier across these groups. Constraints of this form are susceptible to intentional or inadvertent “fairness gerrymandering”, in which a classifier appears to be fair on each individual group, but badly violates the fairness constraint on one or more structured subgroups defined over the protected attributes. We propose instead to demand statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. This interpolates between statistical definitions of fairness and recently proposed individual notions of fairness, but raises several computational challenges. It is no longer clear how to audit a fixed classifier to see if it satisfies such a strong definition of fairness.
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means it is computationally hard in the worst case, even for simple structured subclasses. But it also raises the possibility of applying empirically successful machine learning methods to the problem.
We then derive two algorithms that provably converge to the best fair classifier, given access to oracles which can solve the agnostic learning problem. The algorithms are based on a formulation of subgroup fairness as a two-player zero-sum game between a Learner and an Auditor. Our first algorithm provably converges in a polynomial number of steps. Our second algorithm enjoys only provable asymptotic convergence, but has the merit of simplicity and faster per-step computation. We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.
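In the spirit of the linear-regression heuristic oracle, here is a toy auditor sketch on synthetic data; the setup, names, and thresholding are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def audit_fpr_subgroup(X_protected, y_true, y_pred, threshold=0.0):
    """Toy auditor: among true negatives, regress the classifier's (false
    positive) decisions on the protected attributes; the learned halfspace
    defines a candidate subgroup whose FPR is compared to the overall FPR."""
    neg = y_true == 0
    X, fp = X_protected[neg], y_pred[neg].astype(float)
    Xb = np.hstack([X, np.ones((len(X), 1))])          # add intercept
    w, *_ = np.linalg.lstsq(Xb, fp - fp.mean(), rcond=None)
    in_group = Xb @ w > threshold                      # candidate subgroup
    if in_group.any():
        print("overall FPR:", fp.mean(), "subgroup FPR:", fp[in_group].mean())

rng = np.random.default_rng(0)
Xp = rng.standard_normal((2000, 3))                    # protected attributes
y = rng.integers(0, 2, 2000)
# Hypothetical classifier whose false positives concentrate where Xp[:,0] > 0:
pred = ((Xp[:, 0] > 0) & (rng.random(2000) < 0.6)).astype(int) | y
audit_fpr_subgroup(Xp, y, pred)
```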
Joint work with Seth Neel, Aaron Roth, and Zhiwei Steven Wu.