IGAFIT Algorithmic Colloquium #15
|June 10, 2021|
|Local Computation Algorithms|
|Ronitt Rubinfeld, MIT|
Consider a setting in which the inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input?
We survey recent work in the model of “local computation algorithms”, which, for a given input, supports queries by a user for the values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. Though this model describes sequential computations, techniques from local distributed algorithms have been extremely important in designing efficient local computation algorithms.
In this talk, we describe results on a variety of problems for which sublinear time and space local computation algorithms have been developed — we will give special focus to finding maximal independent sets and generating random objects.
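To make the query-based model concrete, here is a small Python sketch of one classic local computation algorithm for maximal independent set, the random-rank greedy: a vertex joins the MIS iff none of its lower-ranked neighbors does. This is background illustrating the style of the model, not the specific results of the talk; the function name `make_lca_mis` and the adjacency-dict representation are illustrative choices, and a genuine LCA would derive ranks from a shared hash function rather than touching every vertex up front.

```python
import random

def make_lca_mis(adj, seed=0):
    """Query access to one fixed maximal independent set of the graph
    `adj` (dict: vertex -> list of neighbors), via random-rank greedy:
    v is in the MIS iff no neighbor of strictly smaller rank is."""
    rng = random.Random(seed)
    rank = {v: rng.random() for v in adj}  # stand-in for a shared hash
    cache = {}

    def in_mis(v):
        if v not in cache:
            # recursion terminates: ranks strictly decrease along calls
            cache[v] = all(not in_mis(u)
                           for u in adj[v] if rank[u] < rank[v])
        return cache[v]

    return in_mis
```

Each query explores only the chain of lower-ranked neighbors of the queried vertex, which in bounded-degree graphs is small in expectation; all queries are answered consistently with a single global MIS.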
|see the video: (TBD) colloquium #15|
IGAFIT Algorithmic Colloquium #14
|May 20, 2021|
|Algorithms with Predictions: Simpler Analyses and Better Running Times|
|Sergei Vassilvitskii, Google Research New York|
How can machine learning help traditional algorithm design and analysis? One idea is to equip online algorithms with some noisy predictions about the future. Recent work has shown that such “learning-augmented algorithms” can effectively interpolate between offline methods where the full input is known and worst-case online algorithms, resulting in a competitive ratio parameterized by the quality of the predictions.
In this talk, we expand on this work and show that the learning-augmented lens is powerful in and of itself. We first look at the classical secretary problem, and prove that different variants of this age-old problem can be thought of simply as different advice that is available to the algorithm. This view allows us to develop a single analysis to give tight bounds on competitive ratio for many of these variations.
We then turn from competitive ratio to running time, and show how that here too predictions can be helpful. Specifically, we consider the min weight perfect matching problem and show what kind of predictions can be used to improve the running time, effectively formalizing the “warm-start” problem.
Based on joint work with Michael Dinitz, Paul Duetting, Sungjin Im, Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Renato Paes Leme.
|see the video: colloquium #14|
IGAFIT Algorithmic Colloquium #13
|May 6, 2021|
|When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?|
|Adam Smith, Boston University|
Modern machine learning models are complex, and frequently encode surprising amounts of information about individual inputs. In extreme cases, complex models appear to memorize entire input examples, including seemingly irrelevant information (social security numbers from text, for example). In this work, we aim to understand whether this sort of memorization is necessary for accurate learning.
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples. This remains true even when the examples are high-dimensional and have entropy much higher than the sample size, and even when most of that information is ultimately irrelevant to the task at hand. Further, our results do not depend on the training algorithm or the class of models used for learning.
Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks. These tasks can be seen as abstractions of image- and text-related prediction problems. To establish our results, we reduce from a family of one-way communication problems for which we prove new information complexity lower bounds.
Time permitting, I’ll lay out some open problems and directions for future work.
Based on joint work with Mark Bun, Gavin Brown, Vitaly Feldman, and Kunal Talwar, to appear at STOC 2021 ( https://arxiv.org/abs/2012.06421 ).
|see the video: colloquium #13 Part 1; Part 2|
IGAFIT Algorithmic Colloquium #12
|April 22, 2021|
|Prophet and Secretary Online Algorithms for Matching in General Graphs|
|Michal Feldman, Tel Aviv University|
|A common tension in market scenarios is choosing the right timing to commit to a decision. This tension is captured by the mathematical literature of optimal stopping theory, within two models that have come to be known as the secretary problem and the prophet inequality. In these models, a sequence of originally-unknown values arrive one by one. Upon arrival, the online algorithm observes the value and should decide whether or not to accept it. In secretary settings, the value sequence is arbitrary, but the values arrive in a uniformly random order. In prophet settings, every value is drawn from a known probability distribution, but the arrival order is arbitrary.|
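As background for the classical secretary setting, here is a minimal Python sketch of the well-known 1/e stopping rule: observe roughly n/e values without committing, then accept the first value exceeding everything seen so far. This illustrates only the baseline model, not the advice-based analysis from the talk; the function name is a hypothetical choice.

```python
import random

def secretary_pick(values):
    """Classic 1/e rule: skip the first ~n/e candidates, then accept the
    first record-breaker (falling back to the last candidate if none)."""
    n = len(values)
    cutoff = max(1, round(n / 2.718281828))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to take the final candidate
```

Over uniformly random arrival orders, this rule accepts the overall best value with probability tending to 1/e ≈ 0.368 as n grows.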
In this talk I will review the basic settings of secretary and prophet, as well as previous extensions to matching in bipartite graphs with 1-sided vertex arrival. I will then present our recent work, which studies online algorithms (in both secretary and prophet settings) for matching in general graphs. We provide tight competitive ratios for both secretary and prophet matching scenarios under vertex arrival. Under edge arrival, we provide competitive ratios that improve upon the state of the art.
Based on joint work with Tomer Ezra, Nick Gravin and Zhihao Tang.
|see the video: colloquium #12|
IGAFIT Algorithmic Colloquium #11
|April 8, 2021|
|Fine-Grained Complexity of Optimization Problems|
|Karl Bringmann, Saarland University|
|Fine-grained complexity theory is the area of theoretical computer science that proves conditional lower bounds based on conjectures such as the Strong Exponential Time Hypothesis. This enables the design of “best-possible” algorithms, where the running time of the algorithm matches a conditional lower bound.|
This talk surveys the developments of fine-grained complexity on classic optimization problems such as Subset Sum. In particular, we now know a best-possible algorithm for solving Subset Sum in pseudopolynomial time. We survey this result and the subsequent developments on approximation algorithms, output-sensitive algorithms, and pseudopolynomial algorithms under different parameters. We also discuss extensions to other optimization problems such as Knapsack and Partition, scheduling, and integer linear programming.
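For context, the classic pseudopolynomial baseline for Subset Sum is Bellman's O(n·t) dynamic program, sketched here with a Python integer used as a bitset. This snippet only illustrates the problem and its textbook baseline; the best-possible and subsequent algorithms surveyed in the talk improve on it.

```python
def subset_sum(items, target):
    """Bellman's O(n * target) dynamic program for Subset Sum: bit s of
    `reachable` is set iff some subset of the items processed so far
    sums to exactly s."""
    reachable = 1  # initially only the empty sum 0 is reachable
    for x in items:
        reachable |= reachable << x  # add x to every reachable sum
    return bool((reachable >> target) & 1)
```

Packing the DP table into machine words this way is a standard constant-factor trick; the talk's conditional lower bounds concern the exponent, not such word-level savings.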
|see the video: colloquium #11|
IGAFIT Algorithmic Colloquium #10
|March 25, 2021|
|Beyond Worst-Case Analysis|
|Tim Roughgarden, Columbia University|
|One of the primary goals of the mathematical analysis of algorithms is to provide guidance about which algorithm is the “best” for solving a given computational problem. Worst-case analysis summarizes the performance profile of an algorithm by its worst performance on any input of a given size, implicitly advocating for the algorithm with the best-possible worst-case performance. Strong worst-case guarantees are the holy grail of algorithm design, providing an application-agnostic certification of an algorithm’s robustly good performance.|
However, for many fundamental problems and performance measures, such guarantees are impossible and a more nuanced analysis approach is called for. Research in “beyond worst-case analysis” develops alternatives to worst-case analysis, with applications ranging from clustering to linear programming to neural network training. This talk will highlight a mix of classic results, recent developments, and open questions in the area.
|see the video: colloquium #10|
IGAFIT Algorithmic Colloquium #9
|March 11, 2021|
|Graph Clustering Algorithms: Theoretical Insights For Practice|
|Vincent Cohen-Addad, Google Zürich|
|A classic problem in machine learning and data analysis is to partition the vertices of a graph into very dense subgraphs with low expansion. In practice, the most popular approaches often rely on local search or greedy algorithms, not only for their ease of implementation, scalability, and efficiency, but also because of the accuracy of these methods on many real-world graphs. Even though we can’t prove that these algorithms have good performance guarantees in the worst case, can we understand what makes them so successful in practice?|
We review two popular objective functions — modularity and correlation clustering — and shed some light on the inner workings of a popular heuristic (the so-called Louvain algorithm) by providing a “beyond worst-case” analysis. We then describe some key ideas to design graph clustering algorithms in massively parallel environments so as to optimize the above objective in only a constant number of rounds.
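For concreteness, here is a small Python sketch of the modularity objective Q = Σ_c (e_c/m − (d_c/2m)²), where e_c is the number of intra-community edges and d_c the total degree of community c, which heuristics like Louvain locally optimize. This computes the objective only; it is neither the Louvain algorithm nor the massively parallel algorithms from the talk.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity of a partition of a simple undirected graph.
    edges: list of (u, v) pairs; community: dict vertex -> label."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # edges with both endpoints in the same community
    intra = sum(1 for u, v in edges if community[u] == community[v])
    dsum = defaultdict(int)  # total degree per community
    for v, d in deg.items():
        dsum[community[v]] += d
    return intra / m - sum(d * d for d in dsum.values()) / (4 * m * m)
```

Putting every vertex in one community yields Q = 0, and partitions denser than random score positively, which is what local-search heuristics exploit.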
|see the video: colloquium #9|
IGAFIT Algorithmic Colloquium #8
|February 11, 2021|
|Deterministic Mincut in Almost-Linear Time|
|Jason Li, Carnegie Mellon University|
|We present a deterministic (global) mincut algorithm for weighted, undirected graphs that runs in m^(1+o(1)) time, answering an open question of Karger from the 1990s. To obtain our result, we de-randomize the construction of the skeleton graph in Karger’s near-linear time mincut algorithm, which is its only randomized component. In particular, we partially de-randomize the well-known Benczur-Karger graph sparsification technique by random sampling, which we accomplish by the method of pessimistic estimators. Our main technical component is designing an efficient pessimistic estimator to capture the cuts of a graph, which involves harnessing the expander decomposition framework introduced in recent work by Goranci et al. (SODA 2021). As a side-effect, we obtain a structural representation of all approximate mincuts in a graph, which may have future applications.|
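As background on the role of randomization in mincut, here is a Python sketch of Karger's classic contraction algorithm in its random-edge-permutation form. This is not the near-linear algorithm whose skeleton construction the talk de-randomizes; it is only an illustration of the kind of random sampling involved, with an illustrative function name and unweighted edges for simplicity.

```python
import random

def karger_mincut(edges, n, rng, trials=200):
    """Karger's contraction algorithm: contract edges in a random order
    until two super-vertices remain, and take the best cut over many
    independent trials. edges: list of (u, v) on vertices 0..n-1."""
    best = float("inf")
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        comps = n
        order = edges[:]
        rng.shuffle(order)
        for u, v in order:
            if comps == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv  # contract this edge
                comps -= 1
        # count original edges crossing the resulting 2-way partition
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

Each trial finds the minimum cut with probability at least 2/(n(n−1)), so repeating O(n² log n) times succeeds with high probability; the talk's challenge is achieving comparable guarantees deterministically.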
IGAFIT Algorithmic Colloquium #7
|January 28, 2021|
|Paths, cycles and beyond|
|Fedor V. Fomin, University of Bergen|
|I will speak about old and new parameterized algorithms for finding long paths and cycles in graphs. In particular, we discuss problems of finding paths and cycles whose lengths are beyond some “trivial” bounds.|
IGAFIT Algorithmic Colloquium #6
|December 10, 2020|
|Circulation control for faster minimum cost flow in unit-capacity graphs|
|Adrian Vladu, Université de Paris|
|In recent years, continuous optimization primitives have become an essential component in the algorithmist’s toolkit. Among other developments, this has led to tremendous progress on the quest for developing fast graph algorithms.|
In this talk I will give an overview of the techniques based on interior point methods, which have been instrumental to obtaining a set of faster algorithms for fundamental graph problems, which include an m^(4/3+o(1)) log W time algorithm for solving the minimum cost flow problem in graphs with unit capacity.
For sparse graphs, our algorithm improves over the best known running time for this problem and, by well-known reductions, also implies improved running times for the shortest path problem with negative weights, minimum cost bipartite b-matching when |b|_1 = O(m), and recovers the running time of the recent unit capacity maximum flow algorithm due to Liu-Sidford.
Based on https://arxiv.org/abs/2003.04863
IGAFIT Algorithmic Colloquium #5
|November 26, 2020|
|Fully online matching|
|Zhiyi Huang, University of Hong Kong|
|Motivated by applications such as ride sharing, we introduce a fully online model of maximum cardinality matching in which all vertices arrive and depart online. On the arrival of a vertex, its edges to previously-arrived vertices are revealed. The vertex can be matched anytime before its departure, which is after all its neighbors’ arrivals. This fully online matching model generalizes the online bipartite matching model which only considers bipartite graphs and assumes one side of the vertices to be offline.|
We generalize the Ranking and Water-Filling algorithms to fully online matching and its fractional relaxation, respectively. The former is 0.521-competitive in general graphs and 0.567-competitive in bipartite graphs, and the latter is 0.585-competitive in both cases. Further, we prove that fully online matching is strictly harder than online bipartite matching: no online algorithm can be 0.6317-competitive (a bound strictly below 1 − 1/e) in fully online matching, even for bipartite graphs. Finally, we introduce new algorithms that are strictly better than Ranking and Water-Filling in fully online matching.
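For reference, here is a minimal Python sketch of the classical Ranking algorithm (Karp–Vazirani–Vazirani) in the one-sided online bipartite model that the talk generalizes: offline vertices receive random priorities up front, and each arriving online vertex is matched to its highest-priority free neighbor. The function name and input representation are illustrative choices.

```python
import random

def ranking_matching(offline, arrivals, rng):
    """KVV Ranking for one-sided online bipartite matching.
    offline: iterable of offline vertices; arrivals: list of
    (online_vertex, neighbor_list) pairs revealed in order."""
    rank = {u: rng.random() for u in offline}  # priorities fixed up front
    matched = set()   # offline vertices already used
    matching = []
    for v, nbrs in arrivals:
        free = [u for u in nbrs if u not in matched]
        if free:
            u = min(free, key=rank.__getitem__)  # best-ranked free neighbor
            matched.add(u)
            matching.append((u, v))
    return matching
```

In the fully online model of the talk, both sides arrive and depart over time, so even this simple priority scheme requires a substantially different analysis.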
|see the video: Colloquium #5|
IGAFIT Algorithmic Colloquium #4
|November 12, 2020|
|New lower and upper bounds for quantile summary algorithms|
|Graham Cormode, University of Warwick|
|Finding the median, or more generally quantiles, is a core problem in data analysis. The question has been heavily studied in streaming and related models of computation for over four decades. In this talk I will present some recent advances.|
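As a simple baseline for the quantile estimation task (not one of the summary data structures from the talk), one can maintain a uniform reservoir sample of the stream and answer quantile queries from the sorted sample; the sketch below assumes a size-k reservoir, and the function name is illustrative.

```python
import random

def reservoir_quantiles(stream, k, rng):
    """Keep a uniform reservoir sample of size k over a one-pass stream,
    then answer quantile queries from the sorted sample."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = rng.randrange(i + 1)  # keep x with probability k/(i+1)
            if j < k:
                sample[j] = x
    sample.sort()

    def quantile(q):  # q in [0, 1]
        return sample[min(int(q * len(sample)), len(sample) - 1)]

    return quantile
```

A sample of size k gives additive rank error roughly N/√k; the deterministic summaries studied in this line of work achieve ε-approximate ranks with far less space.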
IGAFIT Algorithmic Colloquium #3
|October 29, 2020|
|A (slightly) improved approximation algorithm for metric TSP|
|Nathan Klein, University of Washington|
|In this talk, he describes recent work in which he obtains a 3/2 − ε approximation algorithm for metric TSP, for some ε > 10^(-36). This slightly improves over the classical 3/2 approximation algorithm due to Christofides and Serdyukov. The talk focuses on giving an overview of the key ideas involved. This is joint work with Anna Karlin and Shayan Oveis Gharan.|
IGAFIT Algorithmic Colloquium #2
|October 15, 2020|
|An almost-linear time deterministic algorithm for expander decomposition|
|Thatchaphol Saranurak, Toyota Technological Institute at Chicago|
|Expander decomposition is a powerful tool in many areas of graph algorithms (e.g., approximation algorithms, dynamic algorithms, distributed algorithms, property testing, and sketching).|
We give a deterministic algorithm for finding this decomposition in almost-linear time. Previous algorithms are either randomized or take quadratic time.
As a consequence, we resolve the major open problem of whether there is a deterministic dynamic connectivity algorithm with n^(0.4999) worst-case update time, by giving an algorithm with n^(o(1)) worst-case update time. The result also implies almost-linear-time deterministic algorithms for approximate max flow, electric flow, and Laplacian solvers.
Joint work with Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, and Richard Peng.
IGAFIT Algorithmic Colloquium #1
|October 1, 2020|
|An improved approximation algorithm for ATSP|
|Vera Traub, ETH Zürich|
|In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it.|
Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio.
Overall, we improve the approximation ratio from 506 to 22 + ε for any ε > 0. We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.
This is joint work with Jens Vygen.
Follow us also on Twitter and Facebook. (To subscribe to the IGAFIT Algorithmic Colloquium mailing list with announcements, send an email to email@example.com with subject SUBSCRIBE.)