Follow us on Twitter and Facebook.
(To subscribe to the IGAFIT Algorithmic Colloquium mailing list with announcements, send an email to igafit-colloquium-request@mimuw.edu.pl with subject SUBSCRIBE.)



IGAFIT Algorithmic Colloquium #17

July 1, 2021
3SUM, All-Pairs Shortest Paths and Monochromatic Problems
Virginia Vassilevska Williams, MIT
Two of the main hypotheses of Fine-Grained Complexity concern the 3SUM problem from Computational Geometry and the All-Pairs Shortest Paths (APSP) problem from Graph Algorithms; both are for the word RAM model of computation. The 3SUM hypothesis asserts that given a set A of n integers, any algorithm needs n^{2-o(1)} time to determine whether A contains 3 integers summing to 0. The APSP hypothesis says that given an n node graph G with integer weights bounded by poly(n), any algorithm needs n^{3-o(1)} time to compute the shortest path distances between every pair of nodes of G.
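As background (not part of the talk), here is a minimal sketch of the standard quadratic-time algorithm for 3SUM, the baseline that the hypothesis asserts cannot be meaningfully beaten:

    def three_sum(a):
        # Classic sort-and-two-pointers 3SUM in O(n^2) time; the 3SUM
        # hypothesis asserts every algorithm needs n^{2-o(1)} time.
        a = sorted(a)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = a[i] + a[lo] + a[hi]
                if s == 0:
                    return True
                elif s < 0:
                    lo += 1
                else:
                    hi -= 1
        return False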
Together with the Strong Exponential Time Hypothesis about the complexity of Boolean Satisfiability, these two hypotheses explain the best known running times for a huge variety of problems of interest. However, very little is known about how 3SUM and APSP relate to each other and whether their two hypotheses might be equivalent. In this talk we will discuss some indirect relationships between the two problems. In particular, we will show that 3SUM is fine-grained equivalent to a problem called Monochromatic Convolution, while both 3SUM and APSP are fine-grained reducible to a problem called Monochromatic Triangles. As both monochromatic problems merely ask about equality of colors (and no summations), these reductions show that the hardness of 3SUM and APSP is not due to having to sum large numbers as was originally believed.
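Under the standard formulation of Monochromatic Triangles (an edge-colored graph in which we look for a triangle whose three edges share a color; the abstract does not spell out the definition, so this is an assumption), a brute-force checker looks as follows. The sketch only fixes the problem statement and is not part of the talk's reductions:

    from itertools import combinations

    def has_monochromatic_triangle(n, color):
        # color maps an edge (u, v) with u < v to its color; the talk
        # shows both 3SUM and APSP are fine-grained reducible to
        # beating brute force on this problem.
        def c(u, v):
            return color.get((min(u, v), max(u, v)))
        for u, v, w in combinations(range(n), 3):
            cuv = c(u, v)
            if cuv is not None and cuv == c(v, w) == c(u, w):
                return True
        return False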
Biography: Virginia Vassilevska Williams is a Steven and Renee Finn Career Development Associate Professor at MIT CSAIL. She obtained her Ph.D. from Carnegie Mellon University in 2008. After research and postdoctoral positions at the IAS in Princeton, UC Berkeley and Stanford, she spent 3.5 years as an assistant professor at Stanford University before joining MIT in early 2017. She is the recipient of an NSF CAREER award, a Google Faculty Research Award and an Alfred P. Sloan Research Fellowship, and in 2018 gave an invited lecture at the International Congress of Mathematicians.

see the video: Colloquium #17


IGAFIT Algorithmic Colloquium #16

June 17, 2021
Polyhedral Techniques in Combinatorial Optimization
Ola Svensson, EPFL
In this talk, we will survey recent use of polyhedral techniques in combinatorial optimization. In particular, we introduce techniques for using linear programming formulations, even exponential-sized ones, to extract structure from problem instances that guides the design of algorithms.
The talk will focus on two results: a constant-factor approximation algorithm for the asymmetric traveling salesman problem and a deterministic parallel algorithm for the perfect matching problem. The so-called laminar structure of the considered linear programming formulations plays a central role in both these results.
Finally, we discuss several intriguing research directions and open questions related to these problems.

see the video: Colloquium #16


IGAFIT Algorithmic Colloquium #15

June 10, 2021
Local Computation Algorithms
Ronitt Rubinfeld, MIT
Consider a setting in which the inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input?
We survey recent work in the model of “local computation algorithms” which, for a given input, supports queries by a user for the values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. Though this model describes sequential computations, techniques from local distributed algorithms have been extremely important in designing efficient local computation algorithms.
In this talk, we describe results on a variety of problems for which sublinear time and space local computation algorithms have been developed — we will give special focus to finding maximal independent sets and generating random objects.
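To make the model concrete, here is a toy local computation algorithm for maximal independent set, in the spirit of the model but not one of the optimized algorithms from the talk: fix a random rank per vertex, and a vertex joins the greedy MIS exactly when none of its lower-ranked neighbors does. Answering a single query then explores only a local neighborhood.

    import random
    from functools import lru_cache

    def mis_oracle(adj, seed=0):
        # adj: dict mapping each vertex to its neighbors.
        rng = random.Random(seed)
        rank = {v: rng.random() for v in adj}

        @lru_cache(maxsize=None)
        def in_mis(v):
            # v is in the greedy MIS iff no lower-ranked neighbor is;
            # the recursion only visits vertices of decreasing rank.
            return all(not in_mis(u) for u in adj[v] if rank[u] < rank[v])

        return in_mis

    # Query two vertices of a 5-cycle without computing the whole MIS.
    adj = {i: ((i - 1) % 5, (i + 1) % 5) for i in range(5)}
    query = mis_oracle(adj)
    print(query(0), query(2))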

see the video: Colloquium #15


IGAFIT Algorithmic Colloquium #14

May 20, 2021
Algorithms with Predictions: Simpler Analyses and Better Running Times
Sergei Vassilvitskii, Google Research New York
How can machine learning help traditional algorithm design and analysis? One idea is to equip online algorithms with some noisy predictions about the future. Recent work has shown that such “learning-augmented algorithms” can effectively interpolate between offline methods where the full input is known and worst-case online algorithms, resulting in a competitive ratio parameterized by the quality of the predictions.
In this talk, we expand on this work and show that the learning-augmented lens is powerful in and of itself. We first look at the classical secretary problem, and prove that different variants of this age-old problem can be thought of simply as different advice that is available to the algorithm. This view allows us to develop a single analysis that gives tight bounds on the competitive ratio for many of these variations.
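For reference, here is the textbook 1/e rule for the basic secretary problem, which the talk reinterprets through the advice lens (a sketch of the classical algorithm, not of the talk's unified analysis):

    import math

    def secretary(values):
        # values arrive in uniformly random order; reject the first
        # n/e of them, then accept the first value beating everything
        # seen so far.  Selects the maximum with probability about 1/e.
        n = len(values)
        k = int(n / math.e)
        best_seen = max(values[:k], default=float("-inf"))
        for v in values[k:]:
            if v > best_seen:
                return v
        return values[-1]  # forced to accept the last candidate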
We then turn from competitive ratio to running time, and show how that here too predictions can be helpful. Specifically, we consider the min weight perfect matching problem and show what kind of predictions can be used to improve the running time, effectively formalizing the “warm-start” problem.
Based on joint work with Michael Dinitz, Paul Duetting, Sungjin Im, Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Renato Paes Leme.

see the video: Colloquium #14


IGAFIT Algorithmic Colloquium #13

May 6, 2021
When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Adam Smith, Boston University
Modern machine learning models are complex, and frequently encode surprising amounts of information about individual inputs. In extreme cases, complex models appear to memorize entire input examples, including seemingly irrelevant information (social security numbers from text, for example). In this work, we aim to understand whether this sort of memorization is necessary for accurate learning.
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples. This remains true even when the examples are high-dimensional and have entropy much higher than the sample size, and even when most of that information is ultimately irrelevant to the task at hand. Further, our results do not depend on the training algorithm or the class of models used for learning.
Our problems are simple and fairly natural variants of the next-symbol prediction and the cluster labeling tasks. These tasks can be seen as abstractions of image- and text-related prediction problems. To establish our results, we reduce from a family of one-way communication problems for which we prove new information complexity lower bounds.
Time permitting, I’ll lay out some open problems and directions for future work.
Based on joint work with Mark Bun, Gavin Brown, Vitaly Feldman, and Kunal Talwar, to appear at STOC 2021 (https://arxiv.org/abs/2012.06421).

see the video: Colloquium #13


IGAFIT Algorithmic Colloquium #12

April 22, 2021
Prophet and Secretary Online Algorithms for Matching in General Graphs
Michal Feldman, Tel Aviv University
A common tension in market scenarios is choosing the right timing to commit to a decision. This tension is captured in the mathematical literature on optimal stopping theory by two models that have come to be known as the secretary problem and the prophet inequality. In these models, a sequence of initially unknown values arrives one by one. Upon arrival, the online algorithm observes the value and must decide whether or not to accept it. In secretary settings, the value sequence is arbitrary, but the values arrive in a uniformly random order. In prophet settings, every value is drawn from a known probability distribution, but the arrival order is arbitrary.
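As background for the single-item prophet setting just described, a fixed threshold already guarantees half of the prophet's value (the classical Samuel-Cahn bound). This is a sketch under standard assumptions, not one of the talk's results; the threshold is a median of the distribution of the maximum, estimated here from samples since the distributions are known in the prophet model:

    def prophet_threshold(samples_of_max, stream):
        # Pick tau with Pr[max of the sequence >= tau] ~ 1/2, then
        # accept the first arriving value clearing tau.  For
        # nonnegative values this earns at least E[max] / 2 in
        # expectation.
        tau = sorted(samples_of_max)[len(samples_of_max) // 2]
        for v in stream:
            if v >= tau:
                return v
        return 0  # nothing accepted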
In this talk I will review the basic settings of secretary and prophet, as well as previous extensions to matching in bipartite graphs with 1-sided vertex arrival. I will then present our recent work, which studies online algorithms (in both secretary and prophet settings) for matching in general graphs. We provide tight competitive ratios for both secretary and prophet matching scenarios under vertex arrival. Under edge arrival, we provide competitive ratios that improve upon the state of the art.
Based on joint work with Tomer Ezra, Nick Gravin and Zhihao Tang.

see the video: Colloquium #12


IGAFIT Algorithmic Colloquium #11

April 8, 2021
Fine-Grained Complexity of Optimization Problems
Karl Bringmann, Saarland University
Fine-grained complexity theory is the area of theoretical computer science that proves conditional lower bounds based on conjectures such as the Strong Exponential Time Hypothesis. This enables the design of “best-possible” algorithms, where the running time of the algorithm matches a conditional lower bound.
This talk surveys the developments of fine-grained complexity on classic optimization problems such as Subset Sum. In particular, a best-possible algorithm for solving Subset Sum in pseudopolynomial time has recently been found. We survey this result and the subsequent developments on approximation algorithms, output-sensitive algorithms, and pseudopolynomial algorithms under different parameters. We also discuss extensions to other optimization problems such as Knapsack and Partition, scheduling, and integer linear programming.
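For concreteness, the classical pseudopolynomial baseline is Bellman's O(nT) dynamic program, sketched below with a bitset; the best-possible algorithm referred to in the talk improves this to roughly O(n + T) time, matching conditional lower bounds. This sketch shows the baseline, not that algorithm:

    def subset_sum(a, t):
        # Bellman's dynamic program over achievable sums: bit s of
        # `reach` is set iff some subset of the nonnegative integers
        # in a sums to s.  Runs in O(n * t) bit operations.
        reach = 1  # only the empty sum 0 is achievable initially
        for x in a:
            reach |= reach << x
        return bool((reach >> t) & 1)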

see the video: Colloquium #11


IGAFIT Algorithmic Colloquium #10

March 25, 2021
Beyond Worst-Case Analysis
Tim Roughgarden, Columbia University
One of the primary goals of the mathematical analysis of algorithms is to provide guidance about which algorithm is the “best” for solving a given computational problem. Worst-case analysis summarizes the performance profile of an algorithm by its worst performance on any input of a given size, implicitly advocating for the algorithm with the best-possible worst-case performance. Strong worst-case guarantees are the holy grail of algorithm design, providing an application-agnostic certification of an algorithm’s robustly good performance.
However, for many fundamental problems and performance measures, such guarantees are impossible and a more nuanced analysis approach is called for. Research in “beyond worst-case analysis” develops alternatives to worst-case analysis, with applications ranging from clustering to linear programming to neural network training. This talk will highlight a mix of classic results, recent developments, and open questions in the area.

see the video: Colloquium #10


IGAFIT Algorithmic Colloquium #9

March 11, 2021
Graph Clustering Algorithms: Theoretical Insights For Practice
Vincent Cohen-Addad, Google Zürich
A classic problem in machine learning and data analysis is to partition the vertices of a graph into very dense subgraphs with low expansion. In practice, the most popular approaches often rely on local search or greedy algorithms, not only because of their ease of implementation, scalability, and efficiency, but also because of their accuracy on many real-world graphs. Even though we cannot prove that these algorithms have good performance guarantees in the worst case, can we understand what makes them so successful in practice?
We review two popular objective functions, modularity and correlation clustering, and shed some light on the inner workings of a popular heuristic (the so-called Louvain algorithm) by providing a “beyond worst-case” analysis. We then describe some key ideas for designing graph clustering algorithms in massively parallel environments so as to optimize these objectives in only a constant number of rounds.
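To make the first objective concrete, the modularity of a partition compares the fraction of edges inside communities with what a random graph of the same degree sequence would give; a minimal sketch of the objective itself (not of the Louvain algorithm) follows:

    from collections import Counter

    def modularity(edges, community):
        # Q = sum over communities c of e_c / m - (d_c / 2m)^2, where
        # e_c counts edges inside c and d_c is the total degree of c.
        # Louvain greedily moves vertices between communities to
        # increase Q.
        m = len(edges)
        internal, degree = Counter(), Counter()
        for u, v in edges:
            degree[community[u]] += 1
            degree[community[v]] += 1
            if community[u] == community[v]:
                internal[community[u]] += 1
        return sum(internal[c] / m - (degree[c] / (2 * m)) ** 2
                   for c in degree)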

see the video: Colloquium #9


IGAFIT Algorithmic Colloquium #8

February 11, 2021 
Deterministic Mincut in Almost-Linear Time
Jason Li, Carnegie Mellon University
We present a deterministic (global) mincut algorithm for weighted, undirected graphs that runs in m^{1+o(1)} time, answering an open question of Karger from the 1990s. To obtain our result, we de-randomize the construction of the skeleton graph in Karger’s near-linear time mincut algorithm, which is its only randomized component. In particular, we partially de-randomize the well-known Benczur-Karger graph sparsification technique by random sampling, which we accomplish by the method of pessimistic estimators. Our main technical component is designing an efficient pessimistic estimator to capture the cuts of a graph, which involves harnessing the expander decomposition framework introduced in recent work by Goranci et al. (SODA 2021). As a side-effect, we obtain a structural representation of all approximate mincuts in a graph, which may have future applications.

see the video: Colloquium #8


IGAFIT Algorithmic Colloquium #7

January 28, 2021
Paths, cycles and beyond
Fedor V. Fomin, University of Bergen
I will speak about old and new parameterized algorithms for finding long paths and cycles in graphs. In particular, we discuss problems of finding paths and cycles whose lengths are beyond some “trivial” bounds.

see the video: Colloquium #7


IGAFIT Algorithmic Colloquium #6

December 10, 2020
Circulation control for faster minimum cost flow in unit-capacity graphs
Adrian Vladu, Université de Paris

In recent years, continuous optimization primitives have become an essential component in the algorithmist’s toolkit. Among other developments, this has led to tremendous progress on the quest for developing fast graph algorithms.
In this talk I will give an overview of the techniques based on interior point methods, which have been instrumental in obtaining a set of faster algorithms for fundamental graph problems, including an m^{4/3+o(1)} log W time algorithm for solving the minimum cost flow problem in graphs with unit capacity.
For sparse graphs, our algorithm improves over the best known running time for this problem and, by well-known reductions, also implies improved running times for the shortest path problem with negative weights and for minimum cost bipartite b-matching when ||b||_1 = O(m), and recovers the running time of the recent unit-capacity maximum flow algorithm due to Liu and Sidford.

Based on https://arxiv.org/abs/2003.04863

see the video: Colloquium #6


IGAFIT Algorithmic Colloquium #5

November 26, 2020
Fully online matching
Zhiyi Huang, University of Hong Kong

Motivated by applications such as ride sharing, we introduce a fully online model of maximum cardinality matching in which all vertices arrive and depart online. On the arrival of a vertex, its edges to previously arrived vertices are revealed. The vertex can be matched anytime before its departure, which is after all its neighbors’ arrivals. This fully online matching model generalizes the online bipartite matching model, which considers only bipartite graphs and assumes one side of the vertices to be offline.

We generalize the Ranking and Water-Filling algorithms to fully online matching and its fractional relaxation, respectively. The former is 0.521-competitive in general graphs and 0.567-competitive in bipartite graphs, and the latter is 0.585-competitive in both cases. Further, we prove that fully online matching is strictly harder than online bipartite matching: no online algorithm can be 0.6317-competitive in fully online matching, even for bipartite graphs (note that 0.6317 < 1 - 1/e). Finally, we introduce new algorithms that are strictly better than Ranking and Water-Filling in fully online matching.
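For background, here is the classical Ranking algorithm for online bipartite matching with one offline side, which the talk generalizes to fully online arrivals (a sketch of the classical algorithm, not of the generalization):

    import random

    def ranking(offline, arrivals, seed=0):
        # Karp-Vazirani-Vazirani Ranking: fix a random rank on the
        # offline side, then match each arriving online vertex to its
        # unmatched neighbor of smallest rank.  (1 - 1/e)-competitive.
        rng = random.Random(seed)
        rank = {u: rng.random() for u in offline}
        matched = {}
        for v, neighbors in arrivals:  # (online vertex, offline neighbors)
            free = [u for u in neighbors if u not in matched]
            if free:
                matched[min(free, key=rank.__getitem__)] = v
        return matched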

see the video: Colloquium #5


IGAFIT Algorithmic Colloquium #4

November 12, 2020
New lower and upper bounds for quantile summary algorithms
Graham Cormode, University of Warwick

Finding the median, or more generally quantiles, is a core problem in data analysis. The question has been heavily studied in streaming and related models of computation for over four decades. In this talk I will present some recent advances:

  • Lower bounds for approximating quantiles in the deterministic comparison model, for additive error, which show that the best known algorithm is in fact optimal
  • Upper bounds for relative-error ε-approximations of quantiles, which improve over previous results and exceed the best known lower bounds by only an O(log^{3/2}(1/ε)) factor.
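As a simple point of comparison for the additive-error setting above, uniform sampling already gives a (space-hungry) quantile summary; the algorithms in the talk do far better, so this is only a baseline sketch:

    import math
    import random

    def quantile_sketch(stream, eps, delta=0.01, seed=0):
        # Keep a uniform (reservoir) sample of s = O(log(1/delta)/eps^2)
        # elements; the empirical quantiles of the sample are within
        # additive eps * n of the true ranks with probability 1 - delta.
        rng = random.Random(seed)
        s = max(1, int(math.log(1 / delta) / eps ** 2))
        sample, n = [], 0
        for x in stream:
            n += 1
            if len(sample) < s:
                sample.append(x)
            else:
                j = rng.randrange(n)
                if j < s:
                    sample[j] = x
        sample.sort()
        return lambda phi: sample[min(int(phi * len(sample)), len(sample) - 1)]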

This covers joint work with Pavel Veselý, Justin Thaler, Edo Liberty and Zohar Karnin.

see the video: Colloquium #4


IGAFIT Algorithmic Colloquium #3

October 29, 2020
A (slightly) improved approximation algorithm for metric TSP
Nathan Klein, University of Washington
In this talk, Nathan Klein describes recent work obtaining a (3/2 - ε)-approximation algorithm for metric TSP, for some ε > 10^{-36}. This slightly improves over the classical 3/2-approximation algorithm due to Christofides [1976] and Serdyukov [1978]. The talk focuses on giving an overview of the key ideas involved. This is joint work with Anna Karlin and Shayan Oveis Gharan.
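For reference, the Christofides-Serdyukov baseline that this result improves on can be sketched in a few lines. This assumes the networkx library and a complete graph with metric edge weights, and is the classical algorithm, not the talk's:

    import networkx as nx

    def christofides(G):
        # Christofides-Serdyukov 3/2-approximation: take a minimum
        # spanning tree, add a minimum-weight perfect matching on its
        # odd-degree vertices, and shortcut an Eulerian circuit.
        mst = nx.minimum_spanning_tree(G)
        odd = [v for v, d in mst.degree() if d % 2 == 1]
        matching = nx.min_weight_matching(G.subgraph(odd))
        multi = nx.MultiGraph(mst)
        multi.add_edges_from(matching)
        tour, seen = [], set()
        for u, _ in nx.eulerian_circuit(multi):
            if u not in seen:
                seen.add(u)
                tour.append(u)
        return tour + [tour[0]]  # close the cycle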

see the video: Colloquium #3


IGAFIT Algorithmic Colloquium #2

October 15, 2020
An almost-linear time deterministic algorithm for expander decomposition
Thatchaphol Saranurak, Toyota Technological Institute at Chicago
Expander decomposition is a powerful tool in many areas of graph algorithms (e.g., approximation algorithms, dynamic algorithms, distributed algorithms, property testing, and sketching).
We give a deterministic algorithm for finding this decomposition in almost-linear time. Previous algorithms are either randomized or take quadratic time.
As a consequence, we resolve the major open problem of whether there is a deterministic dynamic connectivity algorithm with n^{0.4999} worst-case update time, by giving an algorithm with n^{o(1)} worst-case update time. The result also implies almost-linear-time deterministic algorithms for approximate max flow, electric flow, and Laplacian solvers.
Joint work with Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, and Richard Peng.

see the video: Colloquium #2


IGAFIT Algorithmic Colloquium #1

October 1, 2020
An improved approximation algorithm for ATSP
Vera Traub, ETH Zürich
In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it.
Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio.
Overall, we improve the approximation ratio from 506 to 22 + ε for any ε > 0. We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.
This is joint work with Jens Vygen.

see the video: Colloquium #1


Follow us also on Twitter and Facebook. (To subscribe to the IGAFIT Algorithmic Colloquium mailing list with announcements, send an email to igafit-colloquium-request@mimuw.edu.pl with subject SUBSCRIBE.)