Articles for TCS
- Paradigms for Unconditional Pseudorandom Generators, on February 18, 2024 at 11:00 pm
Abstract: This is a survey of unconditional pseudorandom generators (PRGs). A PRG uses a short, truly random seed to generate a long, "pseudorandom" sequence of bits. To be more specific, for each restricted model of computation (e.g., bounded-depth circuits or read-once branching programs), we would like to design a PRG that "fools" the model, meaning that every function computable in the model behaves approximately the same when we plug in pseudorandom bits from the PRG as it does when we plug in truly random bits. In this survey, we discuss four major paradigms for designing PRGs:
• We present several PRGs based on k-wise uniform generators, small-bias generators, and simple combinations thereof, including proofs of Viola's theorem on fooling low-degree polynomials [242] and Braverman's theorem on fooling AC0 circuits [36].
• We present several PRGs based on "recycling" random bits to take advantage of communication bottlenecks, such as the Impagliazzo-Nisan-Wigderson generator [131].
• We present connections between PRGs and computational hardness, including the Nisan-Wigderson framework for converting a hard Boolean function into a PRG [183].
• We present PRG frameworks based on random restrictions, including the "polarizing random walks" framework [49].
We explain how to use these paradigms to construct PRGs that work unconditionally, with no unproven complexity-theoretic assumptions. The PRG constructions use ingredients such as finite field arithmetic, expander graphs, and randomness extractors. The analyses use techniques such as Fourier analysis, sandwiching approximators, and simplification-under-restrictions lemmas.
Suggested Citation: Pooya Hatami and William Hoza (2024), "Paradigms for Unconditional Pseudorandom Generators", Foundations and Trends® in Theoretical Computer Science: Vol. 16: No. 1-2, pp 1-210. http://dx.doi.org/10.1561/0400000109
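Note: The abstract does not spell out the formal definition of "fooling," so the following is the standard statement with illustrative notation. A generator G: {0,1}^s → {0,1}^n is said to ε-fool a class F of functions if every f in F satisfies

$$\Big|\; \mathbb{E}_{x \sim \{0,1\}^s}\big[f(G(x))\big] \;-\; \mathbb{E}_{y \sim \{0,1\}^n}\big[f(y)\big] \;\Big| \;\le\; \varepsilon,$$

where x and y are uniformly random; the goal is to make the seed length s as small as possible while fooling the given restricted model.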
- Approximate Degree in Classical and Quantum Computing, on December 30, 2022 at 11:00 pm
Abstract: The approximate degree of a Boolean function f captures how well f can be approximated pointwise by low-degree polynomials. This monograph surveys what is known about approximate degree and illustrates its applications in theoretical computer science.
A particular focus of the survey is a method of proving lower bounds via objects called dual polynomials. These represent a reformulation of approximate degree using linear programming duality. We discuss in detail a recent, powerful technique for constructing dual polynomials, called “dual block composition”.
Suggested Citation: Mark Bun and Justin Thaler (2022), "Approximate Degree in Classical and Quantum Computing", Foundations and Trends® in Theoretical Computer Science: Vol. 15: No. 3-4, pp 229-423. http://dx.doi.org/10.1561/0400000107
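Note: As a quick reference (the notation below follows the standard convention and is not quoted from the monograph), the ε-approximate degree of a Boolean function f: {0,1}^n → {0,1} is

$$\widetilde{\deg}_{\varepsilon}(f) \;=\; \min\big\{ \deg(p) \;:\; p \in \mathbb{R}[x_1,\dots,x_n] \text{ and } |p(x) - f(x)| \le \varepsilon \text{ for all } x \in \{0,1\}^n \big\},$$

and "approximate degree" without a subscript conventionally means ε = 1/3; changing ε to any other constant in (0, 1/2) affects the quantity by at most a constant factor.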
- Multi-Valued Reasoning about Reactive Systems, on November 30, 2022 at 11:00 pm
Abstract: Traditional computer science is Boolean: a Turing machine accepts or rejects its input, and logic assertions are true or false. A primary use of logic in computer science has been the specification and verification of reactive systems. There, desired behaviors of systems are formally specified by temporal-logic formulas, and questions about systems and their behaviors are reduced to questions like satisfiability and model checking. While correctness is binary, many questions we want to ask about systems are multi-valued. The multi-valued setting arises directly in systems with quantitative aspects, for example systems with fuzzy assignments or stochastic dynamics, and arises also in Boolean systems, where it originates from the semantics of the specification formalism. In particular, beyond checking whether a system satisfies its specification, we may want to evaluate the quality with which the specification is satisfied. The term “quality” may refer to many aspects of the behavior: we may want to prioritize different satisfaction alternatives, refer to delays, costs, and more. In recent years, we have seen a growing effort in the formal-methods community to shift from Boolean specification formalisms to multi-valued ones. The shift involves the development of multi-valued temporal logics as well as algorithms and tools for reasoning about such logics.
This survey describes the basics of specification and verification of reactive systems, and the automata-theoretic approach to them: by translating temporal-logic formulas to automata, one reduces questions like satisfiability and model checking to decision problems on automata, like nonemptiness and language containment.
We first describe the Boolean setting: temporal logics and their applications in specification and verification. Since we care about the ongoing behaviors of non-terminating systems, the formalisms we study specify infinite computations, and we focus on the theoretical properties of automata on infinite words. The transition from finite to infinite words results in a beautiful mathematical model with much richer combinatorial properties. We then describe two multi-valued settings. The first is based on finite lattices and the second on arbitrary functions over [0, 1]. In both settings, the goal is to refine the Boolean correctness query to a quantitative-evaluation query. Accordingly, the formalisms we introduce are such that the satisfaction value of a temporal-logic formula in a model, or the membership value of a word in the language of an automaton, are multi-valued, and classical decision problems become search problems.
Suggested Citation: Orna Kupferman (2022), "Multi-Valued Reasoning about Reactive Systems", Foundations and Trends® in Theoretical Computer Science: Vol. 15: No. 2, pp 126-228. http://dx.doi.org/10.1561/0400000083
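Note: As a minimal sketch of how the Boolean semantics lifts to the quantitative setting over [0, 1] (the survey's formalisms may include richer operators, so the clauses below are only illustrative), the satisfaction value of a temporal formula φ in a computation π can be defined by

$$[\![\varphi_1 \wedge \varphi_2, \pi]\!] = \min\big([\![\varphi_1,\pi]\!],\, [\![\varphi_2,\pi]\!]\big), \qquad [\![\neg\varphi, \pi]\!] = 1 - [\![\varphi,\pi]\!], \qquad [\![\mathsf{F}\,\varphi, \pi]\!] = \sup_{i \ge 0} [\![\varphi, \pi^i]\!],$$

where π^i denotes the suffix of π starting at position i; when values are restricted to {0, 1}, these clauses collapse to the usual Boolean semantics of temporal logic.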
- Quantified Derandomization: How to Find Water in the Ocean, on October 11, 2022 at 10:00 pm
Abstract: The focus of this survey is the question of quantified derandomization, which was introduced by Goldreich and Wigderson [44]: Does derandomization of probabilistic algorithms become easier if we only want to derandomize algorithms that err with extremely small probability? How small does this probability need to be in order for the problem’s complexity to be affected?
This question opens the door to studying natural relaxed versions of the derandomization problem, and allows us to construct algorithms that are more efficient than in the general case, as well as to make gradual progress towards solving the general case. In the survey I describe the body of knowledge accumulated since the question’s introduction, focusing on the following directions and results:
1. Hardness vs “quantified” randomness: Assuming sufficiently strong circuit lower bounds, we can derandomize probabilistic algorithms that err extremely rarely while incurring essentially no time overhead.
2. For general probabilistic polynomial-time algorithms, improving on the brute-force algorithm for quantified derandomization implies breakthrough circuit lower bounds, and this statement holds for any given probability of error.
3. Unconditional algorithms for quantified derandomization of low-depth circuits and formulas, as well as near-matching reductions of the general derandomization problem to quantified derandomization for such models.
4. Arithmetic quantified derandomization, and in particular constructions of hitting-set generators for polynomials that vanish extremely rarely.
5. Limitations of certain black-box techniques in quantified derandomization, as well as a tight connection between black-box quantified derandomization and the classic notion of pseudoentropy.
Most of the results in the survey are from known works, but several results are either new or are strengthenings of known results. The survey also offers a host of concrete challenges and open questions surrounding quantified derandomization.
Suggested Citation: Roei Tell (2022), "Quantified Derandomization: How to Find Water in the Ocean", Foundations and Trends® in Theoretical Computer Science: Vol. 15: No. 1, pp 1-125. http://dx.doi.org/10.1561/0400000108
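Note: To fix ideas, the basic task (phrased informally here, with illustrative parameter names) is the following. Given a circuit C: {0,1}^n → {0,1} from the class of interest, together with the promise that C rejects at most B(n) of its 2^n inputs, deterministically find an accepting input, i.e.

$$\big|\{ x \in \{0,1\}^n : C(x) = 0 \}\big| \;\le\; B(n) \quad\Longrightarrow\quad \text{output } x^\star \text{ such that } C(x^\star) = 1.$$

Standard derandomization corresponds to B(n) as large as 2^n/3, whereas the quantified question asks how the complexity changes when B(n) is far smaller, for instance sub-exponential or even polynomial in n (hence "finding water in the ocean").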
- Complexity Theory, Game Theory, and Economics: The Barbados Lectures, on March 3, 2020 at 11:00 pm
Abstract: The goal of this monograph is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory, and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several very recent breakthroughs.
Suggested Citation: Tim Roughgarden (2020), "Complexity Theory, Game Theory, and Economics: The Barbados Lectures", Foundations and Trends® in Theoretical Computer Science: Vol. 14: No. 3–4, pp 222-407. http://dx.doi.org/10.1561/0400000085