 Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, Till Tantau:
Dynamic Kernels for Hitting Sets and Set Packing.
Technical Report,
Electronic Colloquium on Computational Complexity,
2019.
Computing kernels for the hitting set problem (the problem of
finding a size-$k$ set that intersects each hyperedge of a
hypergraph) is a well-studied computational problem. For hypergraphs
with $m$ hyperedges, each of size at most~$d$, the best algorithms
can compute kernels of size $O(k^d)$ in time $O(2^d m)$. In this
paper we generalize the task to the \emph{dynamic} setting where
hyperedges may be continuously added and deleted and we always have
to keep track of a hitting set kernel (including moments when no
size-$k$ hitting set exists). We present a deterministic solution,
based on a novel data structure, that needs worst-case time
$O^*(3^d)$ for updating the kernel upon hyperedge inserts and
time~$O^*(5^d)$ for updates upon deletions, thus nearly matching
the time $O^*(2^d)$ needed by the best static algorithm per
hyperedge. As a novel technical feature, our approach does not use
the standard replace-sunflowers-by-their-cores methodology, but
introduces a generalized concept that is actually easier to compute
and that allows us to achieve a kernel size of $\sum_{i=0}^d k^i$
rather than the typical size $d!\cdot k^d$ resulting from the Sunflower
Lemma. We also show that our approach extends to the dual problem of
finding packings in hypergraphs (the problem of finding $k$ pairwise
disjoint hyperedges), albeit with a slightly larger kernel size of
$\sum_{i=0}^d d^i(k-1)^i$.
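For readers unfamiliar with the static problem, the following Python sketch illustrates the classic bounded-search-tree algorithm for hitting set (this is only the textbook baseline for the static problem; the function name and branching are our own illustration, not the paper's dynamic kernel data structure): pick any hyperedge not yet hit and branch over its at most $d$ vertices, giving a search tree with at most $d^k$ leaves.

```python
def hitting_set(edges, k, chosen=frozenset()):
    """Search for a hitting set of size at most k via a bounded search tree.

    Pick any hyperedge not hit by the current choice and branch over its
    at most d vertices; the search tree has at most d^k leaves.
    """
    unhit = next((e for e in edges if not (e & chosen)), None)
    if unhit is None:
        return set(chosen)      # every hyperedge is already hit
    if k == 0:
        return None             # budget exhausted on this branch
    for v in unhit:             # branch over the <= d vertices of the edge
        result = hitting_set(edges, k - 1, chosen | {v})
        if result is not None:
            return result
    return None

edges = [{1, 2}, {2, 3}, {3, 4}]
solution = hitting_set(edges, 2)   # e.g. {1, 3} hits all three hyperedges
```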
 Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
In Proceedings of STACS 2019, LIPIcs,
2019.
Color coding is an algorithmic technique used in parameterized
complexity theory to detect ``small'' structures inside graphs. The
idea is to derandomize algorithms that first randomly
color a graph and then search for an easily detectable, small color
pattern. We transfer color coding to the world of descriptive
complexity theory by characterizing, purely in terms of the
syntactic structure of describing formulas, when the powerful
second-order quantifiers representing a random coloring can be
replaced by equivalent, simple first-order formulas. Building on
this result, we identify syntactic properties of first-order
quantifiers that can be eliminated from formulas describing
parameterized problems. The result applies to many packing and
embedding problems, but also to the long path problem. Together with a
new result on the parameterized complexity of formula families
involving only a fixed number of variables, we get that many
problems lie in FPT just because of the way they are
commonly described using logical formulas.
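As a concrete illustration of the randomized technique that the paper derandomizes, here is a hypothetical Python sketch of color coding for detecting a simple path on $k$ vertices (function names and the dynamic program over color sets are our own illustration of the standard method): randomly $k$-color the vertices, then search for a path whose colors are pairwise distinct.

```python
import random

def colorful_path(adj, coloring, k):
    """Is there a path on k vertices with pairwise distinct colors?

    reach[v] holds the color sets of colorful paths ending at v;
    each round extends every such path by one edge. Since a color
    never repeats, a counted walk can never revisit a vertex.
    """
    reach = {v: {frozenset([coloring[v]])} for v in adj}
    for _ in range(k - 1):
        nxt = {v: set() for v in adj}
        for u in adj:
            for colors in reach[u]:
                for v in adj[u]:
                    if coloring[v] not in colors:
                        nxt[v].add(colors | {coloring[v]})
        reach = nxt
    return any(reach[v] for v in adj)

def has_k_path(adj, k, trials=300):
    """Color coding: a fixed k-vertex path becomes colorful with
    probability k!/k^k per random coloring, so enough repetitions
    find it with high probability."""
    for _ in range(trials):
        coloring = {v: random.randrange(k) for v in adj}
        if colorful_path(adj, coloring, k):
            return True
    return False
```

Derandomizing the random coloring, e.g. by universal hash families, is exactly the step whose descriptive-complexity counterpart the paper analyzes.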
 Max Bannach, Malte Skambath, Till Tantau:
Towards Work-Efficient Parallel Parameterized Algorithms.
In Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), Springer,
2019.
Parallel parameterized complexity theory studies how
fixed-parameter tractable (fpt) problems can be solved in
parallel. Previous theoretical work focused on parallel
algorithms that are very fast in principle, but did not take into
account that when we only have a small number of processors
(between 2 and, say, 1024), it is more important
that the parallel algorithms are work-efficient. In the
present paper we investigate how work-efficient fpt algorithms can
be designed. We review standard methods from fpt theory, like
kernelization, search trees, and interleaving, and prove
tradeoffs for them between work efficiency and runtime
improvements. This results in a toolbox for
developing work-efficient parallel fpt algorithms.
 Max Bannach, Till Tantau:
Parallel Multivariate Meta-Theorems.
In Proceedings of the 11th International Symposium on Parameterized and Exact Computation (IPEC 2016), LIPIcs,
2016.
Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, ``tractable'' just meant ``solvable in polynomial time,'' but especially modern hardware raises the question of whether we can also achieve ``solvable in polylogarithmic parallel time.'' A framework for this study of \emph{parallel fixed-parameter tractability} is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic metatheorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.
 Michael Elberfeld, Martin Grohe, Till Tantau:
Where First-Order and Monadic Second-Order Logic Coincide.
ACM Transactions on Computational Logic, 17(4):Article No. 25, 2016.
 Malte Skambath, Till Tantau:
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration.
In GD 2016: Graph Drawing and Network Visualization, Volume 9801 of LNCS, pp. 572–586.
Springer,
2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular TeX typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running TeX on the documents then results in documents in the SVG format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
 Malte Skambath, Till Tantau:
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration.
Technical Report arXiv:1608.08385,
CoRR,
2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular TeX typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running TeX on the documents then results in documents in the SVG format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
 Till Tantau:
A Gentle Introduction to Applications of Algorithmic Metatheorems for Space and Circuit Classes.
Algorithms, 9(3):1–44, 2016.
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle’s Theorem, all monadic second-order (“in a certain logic”) properties of graphs of bounded tree width (“structured in a certain way”) can be solved in linear time (“with a certain amount of resources”). Such theorems have become valuable tools in algorithmics: if a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. This paper is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space and circuit classes, and tries to give a flavor of the range of their applications.
 Max Bannach, Christoph Stockhusen, Till Tantau:
Fast Parallel Fixed-Parameter Algorithms via Color Coding.
In Proceedings of the 10th International Symposium on Parameterized and Exact Computation (IPEC 2015), LIPIcs,
2015.
Fixed-parameter algorithms have been successfully applied to solve numerous difficult problems within acceptable time bounds on large inputs. However, most fixed-parameter algorithms are inherently \emph{sequential} and, thus, make no use of the parallel hardware present in modern computers. We show that parallel fixed-parameter algorithms do not only exist for numerous parameterized problems from the literature (including vertex cover, packing problems, cluster editing, cutting vertices, finding embeddings, or finding matchings), but that there are parallel algorithms working in \emph{constant} time or at least in time \emph{depending only on the parameter} (and not on the size of the input) for these problems. Phrased in terms of complexity classes, we place numerous natural parameterized problems in parameterized versions of AC$^0$. On a more technical level, we show how the \emph{color coding} method can be implemented in constant time and apply it to embedding problems for graphs of bounded tree width or tree depth and to model checking first-order formulas in graphs of bounded degree.
 Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space and Circuit Complexity of Parameterized Problems: Classes and Completeness.
Algorithmica, 71(3):661–701, 2015.
The parameterized complexity of a problem is generally considered
``settled'' once it has been shown to be fixed-parameter
tractable or to be complete for a class in a parameterized
hierarchy such as the weft hierarchy. Several natural
parameterized problems have, however, resisted such a
classification. In the present paper we argue that, at least in
some cases, this is due to the fact that the parameterized
complexity of these problems can be better understood in terms of
their \emph{parameterized space} or \emph{parameterized circuit}
complexity. This includes well-studied, natural problems like the
feedback vertex set problem, the associative generability
problem, or the longest common subsequence problem. We show that
these problems lie in and may even be complete for different
parameterized space classes, leading to new insights into the
problems' complexity. The classes we study are defined in terms
of different forms of bounded nondeterminism and simultaneous
time-space bounds.
 Till Tantau:
Existential Second-Order Logic over Graphs: A Complete Complexity-Theoretic Classification.
In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015), Volume 30 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 703–715.
Schloss Dagstuhl – Leibniz-Zentrum für Informatik,
2015.
Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class over graphs either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix $\exists R_1\cdots \exists R_n \forall x \exists y$ and containment in FO for the prefix $\exists M \forall x \exists y$ for monadic $M$. The complex argument by Gottlob et al. concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix $\exists M \forall x\forall y$, which we show to express special constraint satisfaction problems that lie in L.
 Christoph Stockhusen, Till Tantau:
Completeness Results for Parameterized Space Classes.
Technical Report arXiv:1308.2892,
arXiv,
2013.
The parameterized complexity of a problem is considered "settled" once it has
been shown to lie in FPT or to be complete for a class in the W-hierarchy or a
similar parameterized hierarchy. Several natural parameterized problems have,
however, resisted such a classification. At least in some cases, the reason is
that upper and lower bounds for their parameterized space complexity have
recently been obtained that rule out completeness results for parameterized
time classes. In this paper, we make progress in this direction by proving that
the associative generability problem and the longest common subsequence problem
are complete for parameterized space classes. These classes are defined in
terms of different forms of bounded nondeterminism and in terms of simultaneous
time-space bounds. As a technical tool we introduce a "union operation" that
translates between problems complete for classical complexity classes and for
W-classes.
 Christoph Stockhusen, Till Tantau:
Completeness Results for Parameterized Space Classes.
In 8th International Symposium on Parameterized and Exact Computation (IPEC 2013), Lecture Notes in Computer Science, Springer,
2013 (to appear).
The parameterized complexity of a problem is generally considered
``settled'' once it has been shown to lie in FPT or to be complete
for a class in the W-hierarchy or a similar parameterized hierarchy.
Several natural parameterized problems have, however, resisted such
a classification. At least in some cases, the reason is that upper
and lower bounds for their parameterized \emph{space} complexity
have recently been obtained that rule out completeness results for
parameterized \emph{time} classes. In this paper, we make progress
in this direction by proving that the associative generability
problem and the longest common subsequence problem are complete for
parameterized space classes. These classes are defined in terms of
different forms of bounded nondeterminism and in terms of
simultaneous time-space bounds. As a technical tool we introduce a
``union operation'' that translates between problems complete for
classical complexity classes and for W-classes.
 Till Tantau:
Graph Drawing in TikZ.
In Proceedings of Graph Drawing 2012, Volume 7704 of Lecture Notes in Computer Science, pp. 517–528.
Springer,
2013.
At the heart of every good graph drawing algorithm lies an efficient procedure for assigning canvas positions to a graph's nodes and the bend points of its edges. However, every real-world implementation of such an algorithm must address numerous problems that have little to do with the actual algorithm, like handling input and output formats, formatting node texts, and styling nodes and edges. We present a new framework, implemented in the Lua programming language and integrated into the TikZ graphics description language, that aims at simplifying the implementation of graph drawing algorithms. Implementers using the framework can focus on the core algorithmic ideas and will automatically profit from the framework's pre- and postprocessing steps as well as from the extensive capabilities of the TikZ graphics language and the TeX typesetting engine. Algorithms already implemented using the framework include the Reingold–Tilford tree drawing algorithm, a modular version of Sugiyama's layered algorithm, and several force-based multilevel algorithms.
 Till Tantau:
Graph Drawing in TikZ.
Journal of Graph Algorithms and Applications, 17(4):495–513, 2013.
At the heart of every good graph drawing algorithm lies an efficient procedure for assigning canvas positions to a graph's nodes and the bend points of its edges. However, every real-world implementation of such an algorithm must address numerous problems that have little to do with the actual algorithm, like handling input and output formats, formatting node texts, and styling nodes and edges. We present a new framework, implemented in the Lua programming language and integrated into the TikZ graphics description language, that aims at simplifying the implementation of graph drawing algorithms. Implementers using the framework can focus on the core algorithmic ideas and will automatically profit from the framework's pre- and postprocessing steps as well as from the extensive capabilities of the TikZ graphics language and the TeX typesetting engine. Algorithms already implemented using the framework include the Reingold–Tilford tree drawing algorithm, a modular version of Sugiyama's layered algorithm, and several force-based multilevel algorithms.
 Valentina Damerow, Bodo Manthey, Friedhelm Meyer auf der Heide, Harald Räcke, Christian Scheideler, Christian Sohler, Till Tantau:
Smoothed Analysis of LeftToRight Maxima with Applications.
ACM Transactions on Algorithms, 8(3):Article No. 30, 2012.
A left-to-right maximum in a sequence of $n$ numbers $s_1, \dots, s_n$ is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of $n$ numbers $s_i \in [0,1]$ that are perturbed by uniform noise from the interval $[-\epsilon,\epsilon]$, the expected number of left-to-right maxima is $\Theta(\sqrt{n/\epsilon} + \log n)$ for $\epsilon > 1/n$. For Gaussian noise with standard deviation $\sigma$ we obtain a bound of $O((\log^{3/2} n)/\sigma + \log n)$.
We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of $\Theta(\sqrt{n/\epsilon} + \log n)$ and $\Theta(\frac{n}{\epsilon+1}\sqrt{n/\epsilon} + n \log n)$, respectively, for uniform random noise from the interval $[-\epsilon,\epsilon]$. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in $d$-dimensional space.
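The central quantity of this analysis is easy to experiment with. The following Python sketch (our own illustration of the model, not code from the article) counts left-to-right maxima and perturbs the worst-case, strictly increasing sequence with uniform additive noise; the count collapses from $n$ to roughly $\sqrt{n/\epsilon}$ on average.

```python
import random

def left_to_right_maxima(seq):
    """Count the entries strictly larger than everything before them."""
    count, best = 0, float('-inf')
    for x in seq:
        if x > best:
            count, best = count + 1, x
    return count

# The increasing sequence is the worst case: every entry is a maximum.
n, eps = 10_000, 0.5
worst = [i / n for i in range(n)]
# Additive uniform noise from [-eps, eps] destroys most maxima;
# the expected count drops to Theta(sqrt(n/eps) + log n).
perturbed = [x + random.uniform(-eps, eps) for x in worst]
```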
 Michael Elberfeld, Andreas Jakoby, Till Tantau:
Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth.
In Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012), Volume 14 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 66–77.
Schloss Dagstuhl – Leibniz-Zentrum für Informatik,
2012.
An algorithmic meta theorem for a logic and a class C of structures states that all problems expressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle’s Theorem, which states that monadic second-order (mso) definable problems are linear-time solvable on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that mso-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision problems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata’s computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree decompositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.
 Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
Theoretical Computer Science, 432:38–51, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing, which is often the case in laboratory data, the resulting formal problem IPPH, which stands for incomplete perfect phylogeny haplotyping, is NP-complete. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here, at least, a fixed-parameter algorithm is known. Such drastic and ad hoc simplifications turn out to be unnecessary to make IPPH tractable: we present the first theoretical analysis of a parameterized algorithm, which we develop in the course of the paper, that works for arbitrary instances of IPPH. This tractability result is optimal insofar as we prove IPPH to be NP-complete whenever any of the parameters we consider is not fixed, but part of the input.
 Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space Complexity of Parameterized Problems.
In Proceedings of the 7th International Symposium on Parameterized and Exact Computation (IPEC 2012), Volume 7535 of Lecture Notes in Computer Science, pp. 206–217.
Springer,
2012.
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L = NL, which explains why the two problem variants seem to necessitate different algorithmic approaches even though their parameterized time complexity is the same. For a number of further natural parameterized problems, including the longest common subsequence problem and the acceptance problem for multi-head automata, we show that they lie in or are complete for different parameterized space classes, which explains why previous attempts at proving completeness of these problems for parameterized time classes have failed.
 Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space Complexity of Parameterized Problems.
Technical Report,
ECCC,
2012.
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L=NL. For a number of further natural parameterized problems, including the longest common subsequence problem, the acceptance problem for multi-head automata, and the associative generability problem, we show that they lie in or are complete for different parameterized space classes. Our results explain why previous attempts at proving completeness of different problems for parameterized time classes have failed.
 Michael Elberfeld, Till Tantau:
Phylogeny and Parsimony-Based Haplotype Inference with Constraints.
Information and Computation, 213:33–47, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. An extension of this approach tries to incorporate prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. The objective is to increase the accuracy of haplotyping methods, but it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the paper at hand we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
 Michael Elberfeld, Martin Grohe, Till Tantau:
Where First-Order and Monadic Second-Order Logic Coincide.
In Proceedings of the 27th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2012), pp. 265–274.
IEEE Computer Society,
2012.
We study on which classes of graphs first-order logic (FO) and monadic second-order logic (MSO) have the same expressive power. We show that for each class of graphs that is closed under taking subgraphs, FO and MSO have the same expressive power on the class if, and only if, it has bounded tree depth. Tree depth is a graph invariant that measures the similarity of a graph to a star in a similar way that tree width measures the similarity of a graph to a tree. For classes just closed under taking induced subgraphs, we show an analogous result for guarded second-order logic (GSO), the variant of MSO that not only allows quantification over vertex sets but also over edge sets. A key tool in our proof is a Feferman–Vaught-type theorem that is constructive and still works for unbounded partitions.
 Michael Elberfeld, Andreas Jakoby, Till Tantau:
Logspace Versions of the Theorems of Bodlaender and Courcelle.
In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), pp. 143–152.
IEEE Computer Society,
2010.
Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for
every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
 Michael Elberfeld, Andreas Jakoby, Till Tantau:
Logspace Versions of the Theorems of Bodlaender and Courcelle.
Technical Report TR10-062,
Electronic Colloquium on Computational Complexity,
2010.
Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for
every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
 Michael Elberfeld, Till Tantau:
Phylogeny and ParsimonyBased Haplotype Inference with Constraints.
Technical Report SIIM-TR-A-10-01,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2010.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. In their CPM’09 paper, Fellows et al. studied an extension of this approach that incorporates prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. While this approach is attractive to increase the accuracy of haplotyping methods, it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the paper at hand we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
 Michael Elberfeld, Till Tantau:
Phylogeny and Parsimony-Based Haplotype Inference with Constraints.
In Proceedings of the 21st Annual Symposium on Combinatorial Pattern Matching (CPM 2010), Volume 6129 of Lecture Notes in Computer Science, pp. 177–189.
Springer,
2010.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. In their CPM 2009 paper, Fellows et al. studied an extension of this approach that incorporates prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. While this approach may help to increase the accuracy of haplotyping methods, it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the present paper we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
 Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
Theoretical Computer Science, 411:783–798, 2010.
A k-king in a directed graph is a node from which each node in the graph can be reached via paths of length at most $k$. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of reachability problems and semi-feasible sets. In this paper, we study the complexity of recognizing k-kings. For each succinctly specified family of tournaments (completely oriented digraphs), the k-king problem is easily seen to belong to $\Pi_2^{\mathrm p}$. We prove that the complexity of kingship problems is a rich enough vocabulary to pinpoint every nontrivial many-one degree in $\Pi_2^{\mathrm p}$. That is, we show that for every $k \ge 2$ every set in $\Pi_2^{\mathrm p}$ other than $\emptyset$ and $\Sigma^*$ is equivalent to a k-king problem under $\leq_{\mathrm m}^{\mathrm p}$-reductions. The equivalence can be instantiated via a simple padding function. Our results can be used to show that the radius problem for arbitrary succinctly represented graphs is $\Sigma_3^{\mathrm p}$-complete. In contrast, the diameter problem for arbitrary succinctly represented graphs (or even tournaments) is $\Pi_2^{\mathrm p}$-complete.
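The definition of a k-king translates directly into a reachability check: a node is a k-king exactly when a breadth-first search from it reaches every node of the graph within k steps. A minimal sketch (function and example names are ours, purely illustrative):

```python
from collections import deque

def is_k_king(graph, node, k):
    """Return True iff every node of `graph` (an adjacency dict) is
    reachable from `node` via a path of length at most k."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # do not explore beyond depth k
            continue
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return len(dist) == len(graph)

# In the 3-cycle tournament every node is a 2-king.
cycle = {0: [1], 1: [2], 2: [0]}
print(all(is_k_king(cycle, v, 2) for v in cycle))  # True
```

The hardness results of the paper concern succinctly specified tournaments, where the graph is given by a circuit rather than an adjacency list, so this explicit check no longer runs in polynomial time there.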
 Jens Gramm, Arfst Nickelsen, Till Tantau:
Fixed-Parameter Algorithms in Phylogenetics.
In Bioinformatics: Volume I: Data, Sequence Analysis and Evolution, Volume 452 of Methods in Molecular Biology, pp. 507–535.
Springer,
2008.
We survey the use of fixedparameter algorithms in
phylogenetics. A central computational problem in
this field is the construction of a likely phylogeny
(genealogical tree) for a set of species based on
observed differences in the phenotype, on
differences in the genotype, or on given partial
phylogenies. Ideally, one would like to construct
so-called perfect phylogenies, which arise from an
elementary evolutionary model, but in practice one
must often be content with phylogenies whose
"distance from perfection" is as small as
possible. The computation of phylogenies also has
applications in seemingly unrelated areas such as
genomic sequencing and finding and understanding
genes. The numerous computational problems arising
in phylogenetics are often NP-complete, but for many
natural parametrizations they can be solved using
fixed-parameter algorithms.
 Bodo Manthey, Till Tantau:
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise.
In Proceedings of MFCS 2008, Volume 5162 of Lecture Notes in Computer Science, pp. 467–478.
Springer,
2008.
Binary search trees are a fundamental data structure and their height plays a
key role in the analysis of divide-and-conquer algorithms like quicksort.
We analyze their smoothed height under additive uniform noise: An
adversary chooses a sequence of~$n$ real numbers in the range
$[0,1]$, each number is individually perturbed by adding a value
drawn uniformly at random from an interval of size~$d$, and the
resulting numbers are inserted into a search tree.
An analysis of the smoothed tree height subject to $n$ and $d$ lies
at the heart of our paper: We prove that the smoothed height of binary search
trees is $\Theta (\sqrt{n/d} + \log n)$, where $d \ge 1/n$ may depend on~$n$.
Our analysis starts with the simpler problem of determining the
smoothed number of left-to-right maxima in a sequence.
We establish matching bounds, namely
once more $\Theta (\sqrt{n/d} + \log n)$. We also apply our findings
to the performance of the quicksort algorithm and prove that the
smoothed number of comparisons made by quicksort is
$\Theta(\frac{n}{d+1} \sqrt{n/d} + n \log n)$.
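The perturbation model in this abstract is simple enough to simulate. The sketch below (all names are our own; it illustrates the model, not the paper's proof) perturbs an adversarial sorted sequence and counts left-to-right maxima, the quantity at the start of the analysis:

```python
import random

def left_to_right_maxima(seq):
    """Count the elements that are strictly larger than everything before them."""
    count, best = 0, float("-inf")
    for x in seq:
        if x > best:
            count, best = count + 1, x
    return count

def perturb(seq, d):
    """Add independent uniform noise from an interval of size d to each entry."""
    return [x + random.uniform(0, d) for x in seq]

random.seed(0)
n, d = 1000, 1.0
adversarial = [i / n for i in range(n)]   # sorted: all n entries are maxima
print(left_to_right_maxima(adversarial))              # 1000
print(left_to_right_maxima(perturb(adversarial, d)))  # far fewer after smoothing
```

Inserting the perturbed sequence into a binary search tree (instead of counting maxima) gives the smoothed tree height studied in the paper.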
 Till Tantau:
Complexity of the Undirected Radius and Diameter Problems for Succinctly Represented Graphs.
Technical Report SIIM-TR-A-08-03,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
 Till Tantau:
Der One-Time-Pad-Algorithmus.
In Taschenbuch der Algorithmen, Springer,
2008.
A contribution on the one-time-pad algorithm, part of the "Algorithm of the Week" series during the Year of Informatics 2006.
 Till Tantau:
Generalizations of the Hartmanis-Immerman-Sewelson Theorem and Applications to Infinite Subsets of P-Selective Sets.
Technical Report ECCC-TR08-027,
Electronic Colloquium on Computational Complexity,
2008.
The Hartmanis-Immerman-Sewelson theorem is the classical link between the exponential and the polynomial time realm. It states that NE = E if, and only if, every sparse set in NP lies in P. We establish similar links for classes other than sparse sets: 1. E = UE if, and only if, all functions f: {1}^* to Sigma^* in NPSV_g lie in FP. 2. E = NE if, and only if, all functions f: {1}^* to Sigma^* in NPFewV lie in FP. 3. E = E^NP if, and only if, all functions f: {1}^* to Sigma^* in OptP lie in FP. 4. E = E^NP if, and only if, all standard left cuts in NP lie in P. 5. E = EH if, and only if, PH cap P/poly = P. We apply these results to the immunity of P-selective sets. It is known that they can be bi-immune, but not Pi_2^p/1-immune. Their immunity is closely related to top-Toda languages, whose complexity we link to the exponential realm, and also to king languages. We introduce the new notion of superkings, which are characterized in terms of exists-forall predicates rather than forall-exists predicates, and show that king languages cannot be Sigma_2^p-immune. As a consequence, P-selective sets cannot be Sigma_2^p/1-immune and, if E^NP^NP = E, not even P/1-immune.
 Michael Elberfeld, Till Tantau:
Computational Complexity of Perfect-Phylogeny-Related Haplotyping Problems.
Technical Report SIIM-TR-A-08-02,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. This
problem, which has strong practical applications, can be approached
using both statistical as well as combinatorial methods. While the
most direct combinatorial approach, maximum
parsimony, leads to NP-complete problems, the perfect phylogeny
model proposed by Gusfield yields a problem, called PPH, that can be
solved in polynomial (even linear) time. Even this may
not be fast enough when the whole genome is studied, leading to the
question of whether parallel algorithms can be used to solve the PPH
problem. In the present paper we answer this question affirmatively,
but we also give lower bounds on its complexity. In
detail, we show that the problem lies in Mod$_2$L, a subclass of the
circuit complexity class NC$^2$, and is hard for
logarithmic space and thus presumably not in NC$^1$. We also
investigate variants of the PPH problem that have been studied in
the literature, like the perfect path phylogeny haplotyping problem
and the combined problem where a perfect phylogeny of maximal parsimony
is sought, and show that some of these variants are TC$^0$-complete or
lie in AC$^0$.
 Michael Elberfeld, Till Tantau:
Computational Complexity of Perfect-Phylogeny-Related Haplotyping Problems.
In Proceedings of the 33rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2008), Volume 5162 of Lecture Notes in Computer Science, pp. 299–310.
Springer,
2008.
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. This
problem, which has strong practical applications, can be approached
using both statistical as well as combinatorial methods. While the
most direct combinatorial approach, maximum
parsimony, leads to NP-complete problems, the perfect phylogeny
model proposed by Gusfield yields a problem, called PPH, that can be
solved in polynomial (even linear) time. Even this may
not be fast enough when the whole genome is studied, leading to the
question of whether parallel algorithms can be used to solve the PPH
problem. In the present paper we answer this question affirmatively,
but we also give lower bounds on its complexity. In
detail, we show that the problem lies in Mod$_2$L, a subclass of the
circuit complexity class NC$^2$, and is hard for
logarithmic space and thus presumably not in NC$^1$. We also
investigate variants of the PPH problem that have been studied in
the literature, like the perfect path phylogeny haplotyping problem
and the combined problem where a perfect phylogeny of maximal parsimony
is sought, and show that some of these variants are TC$^0$-complete or
lie in AC$^0$.
 Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
Technical Report SIIM-TR-A-08-05,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. One
fast haplotyping method is based on an evolutionary model where a
perfect phylogenetic tree is sought that explains the
observed data. Unfortunately, when data entries are missing, as is
often the case in real laboratory data, the resulting formal problem
IPPH, which stands for incomplete perfect phylogeny
haplotyping, is NP-complete and no theoretical results are known
concerning its approximability, fixed-parameter tractability or
exact algorithms for it. Even radically simplified versions, such as
the restriction to phylogenetic trees consisting of just two directed
paths from a given root, are still NP-complete, but here, at least,
a fixed-parameter algorithm is known. We generalize this algorithm
to arbitrary tree topologies and present the first
theoretical analysis of an algorithm that works on arbitrary
instances of the original IPPH problem. At the same time we
also show that restricting the tree topology does not always make
finding phylogenies easier: while the incomplete directed perfect phylogeny
problem is well-known to be solvable in polynomial time, we show
that the same problem restricted to path topologies is NP-complete.
 Jens Gramm, Arfst Nickelsen, Till Tantau:
Fixed-Parameter Algorithms in Phylogenetics.
The Computer Journal, 51(1):79–101, 2008.
We survey the use of fixedparameter algorithms in
the field of phylogenetics, which is the study of
evolutionary relationships. The central problem in
phylogenetics is the reconstruction of the
evolutionary history of biological species, but its
methods also apply to linguistics, philology, or
architecture. A basic computational problem is the
reconstruction of a likely phylogeny (genealogical
tree) for a set of species based on observed
differences in the phenotype like color or form of
limbs, based on differences in the genotype like
mutated nucleotide positions in the DNA sequence, or
based on given partial phylogenies. Ideally, one
would like to construct so-called perfect
phylogenies, which arise from a very simple
evolutionary model, but in practice one must often
be content with phylogenies whose ``distance from
perfection'' is as small as possible. The
computation of phylogenies has applications in
seemingly unrelated areas such as genomic sequencing
and finding and understanding genes. The numerous
computational problems arising in phylogenetics
are often NP-complete, but for many natural
parametrizations they can be solved using
fixed-parameter algorithms.
 Jens Gramm, Till Nierhoff, Roded Sharan, Till Tantau:
Haplotyping with Missing Data via Perfect Path Phylogenies.
Discrete and Applied Mathematics, 155:788–805, 2007.
Computational methods for inferring haplotype information from genotype data are used in studying the association between genomic variation and medical condition. Recently, Gusfield proposed a haplotype inference method that is based on perfect phylogeny principles. A fundamental problem arises when one tries to apply this approach in the presence of missing genotype data, which is common in practice. We show that the resulting theoretical problem is NP-hard even in very restricted cases. To cope with missing data, we introduce a variant of haplotyping via perfect phylogeny in which a path phylogeny is sought. Searching for perfect path phylogenies is strongly motivated by the characteristics of human genotype data: 70% of real instances that admit a perfect phylogeny also admit a perfect path phylogeny. Our main result is a fixed-parameter algorithm for haplotyping with missing data via perfect path phylogenies. We also present a simple linear-time algorithm for the problem on complete data.
 Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
In Proceedings of FCT 2007, Volume 4639 of Lecture Notes in Computer Science, pp. 328–340.
Springer,
2007.
A k-king in a directed graph is a node from which
each node in the graph can be reached via paths of
length at most $k$. Recently, kings have proven
useful in theoretical computer science, in
particular in the study of the complexity of
reachability problems and semi-feasible sets. In
this paper, we study the complexity of recognizing
k-kings. For each succinctly specified family of
tournaments (completely oriented digraphs), the
k-king problem is easily seen to belong to
$\Pi_2^{\mathrm p}$. We prove that the complexity of
kingship problems is a rich enough vocabulary to
pinpoint every nontrivial many-one degree in
$\Pi_2^{\mathrm p}$. That is, we show that for every
$k \ge 2$ every set in $\Pi_2^{\mathrm p}$ other
than $\emptyset$ and $\Sigma^*$ is equivalent to a
k-king problem under
$\leq_{\mathrm m}^{\mathrm p}$-reductions. The equivalence can be instantiated
via a simple padding function. Our results can be
used to show that the radius problem for arbitrary
succinctly represented graphs is
$\Sigma_3^{\mathrm p}$-complete. In contrast, the diameter problem for
arbitrary succinctly represented graphs (or even
tournaments) is $\Pi_2^{\mathrm p}$-complete.
 Andreas Jakoby, Till Tantau:
Logspace Algorithms for Computing Shortest and Longest Paths in SeriesParallel Graphs.
In Proceedings of the 27th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2007), Volume 4855 of Lecture Notes in Computer Science, pp. 216–227.
Springer,
2007.
For many types of graphs, including directed acyclic graphs, undirected graphs, tournament
graphs, and graphs with bounded independence number, the shortest path problem is NL-complete.
The longest path problem is even NP-complete for many types of graphs, including undirected
K_{5}-minor-free graphs and even planar graphs. In the present paper we present logspace algorithms
for computing shortest and longest paths in series-parallel graphs where the edges can be directed
arbitrarily. The class of series-parallel graphs that we study can be characterized alternatively as the
class of K_{4}-minor-free graphs and also as the class of graphs of treewidth 2. It is well-known that
for graphs of bounded treewidth many intractable problems can be solved efficiently, but previous
work was focused on finding algorithms with low parallel or sequential time complexity. In contrast,
our results concern the space complexity of shortest and longest path problems. In particular, our
results imply that for graphs of treewidth 2 these problems are L-complete.
 Bodo Manthey, Till Tantau:
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise.
Technical Report ECCC-TR07-039,
Electronic Colloquium on Computational Complexity,
2007.
Binary search trees are a fundamental data structure and their height plays a key role in the analysis of divide-and-conquer algorithms like quicksort. Their worst-case height is linear; their average height, whose exact value is one of the best-studied problems in average-case complexity, is logarithmic. We analyze their smoothed height under additive noise: An adversary chooses a sequence of $n$ real numbers in the range $[0,1]$; each number is individually perturbed by adding a random value from an interval of size $d$; and the resulting numbers are inserted into a search tree. The expected height of this tree is called smoothed tree height. If $d$ is very small, namely for $d < 1/n$, the smoothed tree height is the same as the worst-case height; if $d$ is very large, the smoothed tree height approaches the logarithmic average-case height. An analysis of what happens between these extremes lies at the heart of our paper: We prove that the smoothed height of binary search trees is $\Theta(\sqrt{n/d} + \log n)$, where $d \ge 1/n$ may depend on $n$. This implies that the logarithmic average-case height becomes manifest only for $d \in \Omega(n/\log^2 n)$. For the analysis, we first prove that the smoothed number of left-to-right maxima in a sequence is also $\Theta(\sqrt{n/d} + \log n)$. We apply these findings to the performance of the quicksort algorithm, which needs $\Theta(n^2)$ comparisons in the worst case and $\Theta(n \log n)$ on average, and prove that the smoothed number of comparisons made by quicksort is $\Theta(\frac{n}{d+1} \sqrt{n/d} + n \log n)$. This implies that the average case becomes manifest already for $d \in \Omega(\sqrt[3]{n/\log^2 n})$.
 Till Tantau:
Logspace Optimization Problems and Their Approximability Properties.
Theory of Computing Systems, 41(2):327–350, 2007.
Logspace optimization problems are the logspace
analogues of the wellstudied polynomialtime
optimization problems. Similarly to them, logspace
optimization problems can have vastly different
approximation properties, even though the underlying
decision problems have the same computational
complexity. Natural problems, including the
shortest path problems for directed graphs,
undirected graphs, tournaments, and forests,
exhibit such a varying complexity. In order to study
the approximability of logspace optimization
problems in a systematic way, polynomialtime
approximation classes and polynomialtime reductions
between optimization problems are transferred to
logarithmic space. It is proved that natural
problems are complete for different logspace
approximation classes. This is used to show that
under the assumption $L \neq NL$ some logspace
optimization problems cannot be approximated with a
constant ratio; some can be approximated with a
constant ratio, but do not permit a logspace
approximation scheme; and some have a logspace
approximation scheme, but cannot be solved in
logarithmic space.
 Jens Gramm, Tzvika Hartman, Till Nierhoff, Roded Sharan, Till Tantau:
On the Complexity of SNP Block Partitioning Under the Perfect Phylogeny Model.
In Proceedings of the Workshop on Algorithms in Bioinformatics (WABI), Volume 4175 of Lecture Notes in Computer Science, pp. 92–102.
Springer,
2006.
Recent technologies for typing single nucleotide
polymorphisms (SNPs) across a population are
producing genome-wide genotype data for tens of
thousands of SNP sites. The emergence of such large
data sets underscores the importance of algorithms
for largescale haplotyping. Common haplotyping
approaches first partition the SNPs into blocks of
high linkage disequilibrium, and then infer
haplotypes for each block separately. We investigate
an integrated haplotyping approach where a partition
of the SNPs into a minimum number of non-contiguous
subsets is sought, such that each subset can be
haplotyped under the perfect phylogeny model. We
show that finding an optimum partition is NP-hard
even if we are guaranteed that two subsets
suffice. On the positive side, we show that a
variant of the problem, in which each subset is
required to admit a perfect path phylogeny
haplotyping, is solvable in polynomial time.
 Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
Technical Report URCS-TR-905,
Technical report series of the University of Rochester, Computer Science Department,
2006.
A king in a directed graph is a node from which each
node in the graph can be reached via paths of length
at most two. There is a broad literature on
tournaments (completely oriented digraphs), and it
has been known for more than half a century that all
tournaments have at least one king
[Lan53]. Recently, kings have proven useful in
theoretical computer science, in particular in the
study of the complexity of reachability problems
[Tan01,NT05] and semi-feasible sets
[HNP98,HT06,HOZZ06].
In this paper, we study the complexity of
recognizing kings. For each succinctly specified
family of tournaments, the king
problem is already known to belong to $\Pi_2^p$
[HOZZ06]. We prove that the complexity of kingship
problems is a rich enough vocabulary to pinpoint
every nontrivial manyone degree in $\Pi_2^p$. That
is, we show that \emph{every} set in $\Pi_2^p$ other
than $\emptyset$ and $\Sigma^*$ is equivalent to a
king problem under $\leq_m^p$-reductions. Indeed, we
show that the equivalence can even be instantiated
via relatively simple padding, and holds even if the
notion of kings is redefined to refer to k-kings
(for any fixed $k \geq 2$): nodes from which
all nodes can be reached via paths of length at most
$k$.
Using these and related techniques, we obtain a
broad range of additional results about the complexity of king
problems, diameter problems, radius problems, and
initial component problems. It follows easily from
our proof approach that the problem of testing
kingship in succinctly specified graphs (which need
not be tournaments) is $\Pi_2^p$-complete. We show
that the radius problem for arbitrary succinctly
represented graphs is $\Sigma_3^p$-complete, but
that in contrast the diameter problem for arbitrary
succinctly represented graphs (or even tournaments)
is $\Pi_2^p$-complete. Yet again in contrast, we
prove that initial component languages (which ask
whether a given node can reach all other nodes in
its tournament) all fall within $\Pi_2^p$, yet
cannot be $\Pi_2^p$-complete, or even
NP-hard, unless P = NP.
 Richard Karp, Till Nierhoff, Till Tantau:
Optimal Flow Distribution Among Multiple Channels with Unknown Capacities.
In Essays in Theoretical Computer Science in Memory of Shimon Even, Volume 3895 of Lecture Notes in Computer Science (Festschriften), pp. 111–128.
Springer,
2006.
Consider a simple network flow problem in which a
flow of value D must be split among n
channels directed from a source to a sink. The
initially unknown channel capacities can be probed
by attempting to send a flow of at most D
units through the network. If the flow is not
feasible, we are told on which channels the capacity
was exceeded (binary feedback) and possibly also how
many units of flow were successfully sent on these
channels (throughput feedback). For throughput
feedback we present optimal protocols for minimizing
the number of rounds needed to find a feasible flow
and for minimizing the total amount of wasted
flow. For binary feedback we present an
asymptotically optimal protocol.
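The probing model can be made concrete with a toy protocol. The sketch below is our own illustration of the model, not the optimal protocol of the paper: it splits the remaining demand evenly over all channels not yet known to be saturated and uses the throughput feedback to keep whatever got through.

```python
def probe(attempt, capacities):
    """One probing round: per-channel throughput (throughput feedback)."""
    return [min(a, c) for a, c in zip(attempt, capacities)]

def find_feasible_flow(D, capacities):
    """Split the remaining demand evenly over channels not yet known to be
    saturated; keep what got through, mark channels that overflowed."""
    n = len(capacities)
    known = [0.0] * n          # flow known to be feasible on each channel
    saturated = [False] * n    # channels whose capacity we have already hit
    rounds = 0
    while sum(known) < D - 1e-9:
        open_ch = [i for i in range(n) if not saturated[i]]
        if not open_ch:
            return None, rounds          # total capacity is below D
        extra = (D - sum(known)) / len(open_ch)
        attempt = [known[i] + (extra if i in open_ch else 0.0) for i in range(n)]
        result = probe(attempt, capacities)
        rounds += 1
        for i in open_ch:
            if result[i] < attempt[i]:   # the "binary" part of the feedback
                saturated[i] = True
            known[i] = result[i]         # the "throughput" part of the feedback
    return known, rounds

flow, rounds = find_feasible_flow(8, [3, 5, 2])
print(flow, rounds)
```

On capacities (3, 5, 2) and demand D = 8 this finds a feasible flow of value 8 in two rounds; if the total capacity is below D, it reports failure once every channel is known to be saturated.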
 Till Tantau:
The Descriptive Complexity of the Reachability Problem As a Function of Different Graph Parameters.
Technical Report ECCC-TR06-035,
Electronic Colloquium on Computational Complexity,
2006.
The reachability problem for graphs cannot be
described, in the sense of descriptive complexity
theory, using a single first-order formula. This is
true both for directed and undirected graphs, both
in the finite and in the infinite case. However, if we restrict
ourselves to graphs in which a certain graph
parameter is fixed to a certain value, first-order
formulas often suffice. A trivial example is the class of graphs
whose number of vertices is fixed to n. In
such graphs reachability can be described using a
first-order formula with a quantifier nesting depth
of log_{2} n, which is both a lower
and an upper bound. In this paper we investigate how
the descriptive complexity of the reachability
problem varies as a function of graph parameters
such as the size of the graph, the clique number,
the matching number, the independence number or the
domination number. The independence number turns out
to be by far the most challenging graph parameter.
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Context-Free Languages Can Be Accepted With Absolutely No Space Overhead.
Information and Computation, 203(2):163–180, 2005.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-size memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
all context-free languages can be accepted
nondeterministically in polynomial time with
absolutely no space overhead, and that all
deterministic context-free languages can be accepted
deterministically in polynomial time with absolutely
no space overhead.
 Richard Karp, Till Nierhoff, Till Tantau:
Optimal Flow Distribution Among Multiple Channels with Unknown Capacities.
In Proceedings of the 2nd Brazilian Symposium on Graphs, Algorithms and Combinatorics (GRACO 2005), Volume 19 of Electronic Notes in Discrete Mathematics, pp. 225–231.
Springer,
2005.
Consider a simple network flow problem in which
there are n channels directed from a source
to a sink. The channel capacities are unknown and we
wish to determine a feasible network flow of value
D. Flow problems with unknown capacities
arise naturally in numerous applications, including
interdomain traffic routing in the Internet
[Chandrayana et al. 2004], bandwidth allocation for
sending files in peertopeer networks, and the
distribution of physical goods like newspapers among
different points of sale. We study protocols that
probe the network by attempting to send a flow of at
most D units through the network. If the flow
is not feasible, the protocol is told on which
channels the capacity was exceeded (binary feedback)
and possibly also how many units of flow were
successfully sent on these channels (throughput
feedback). For the latter, more informative, type of
feedback we present optimal protocols for minimizing
the number of rounds needed to find a feasible flow
and for minimizing the total amount of wasted
flow. For binary feedback, we show that one can
exploit the fact that network capacities are often
larger than the demand D: We present a
protocol for this situation that is asymptotically
optimal and finds a solution more quickly than the
generalized binary search protocol previously
proposed in the literature [Chandrayana et
al. 2004]. For the special case of two channels we
present a protocol that is optimal and outperforms
binary search.
 Arfst Nickelsen, Till Tantau:
The Complexity of Finding Paths in Graphs with Bounded Independence Number.
SIAM Journal on Computing, 34(5):1176–1195, 2005.
We study the problem of finding a path between two
vertices in finite directed graphs whose
independence number is bounded by some constant
k. The independence number of a graph is the
largest number of vertices that can be picked such
that there is no edge between any two of them. The
complexity of this problem depends on the exact
question we ask: Do we only wish to tell whether a
path exists? Do we also wish to construct such a
path? Are we required to construct the shortest one?
Concerning the first question, we show that the
reachability problem is first-order definable for
all k and that its succinct version is
Pi_{2}^{P}-complete for all
k. In contrast, the reachability problems for
many other types of finite graphs, including dags
and trees, are not first-order definable and their
succinct versions are PSPACE-complete. Concerning
the second question, we show that not only can we
construct paths in logarithmic space, there even
exists a logspace approximation scheme for this
problem. The scheme gets a ratio r ≥ 1 as
additional input and outputs a path that is at most
r times as long as the shortest
path. Concerning the third question, we show that
even telling whether the shortest path has a certain
length is NL-complete and thus as difficult as for
arbitrary directed graphs.
 Till Tantau:
Logspace Optimization Problems and Their Approximability Properties.
In Proceedings of the 15th International Symposium on Fundamentals of Computation Theory (FCT 2005), Volume 3623 of Lecture Notes in Computer Science, pp. 92–103.
Springer,
2005.
This paper introduces logspace optimization problems
as analogues of the wellstudied polynomialtime
optimization problems. Similarly to them, logspace
optimization problems can have vastly different
approximation properties, even though the underlying
decision problems have the same computational
complexity. Natural problems, including the shortest
path problems for directed graphs, undirected
graphs, tournaments, and forests, exhibit such a
varying complexity. In order to study the
approximability of logspace optimization problems in
a systematic way, polynomialtime approximation
classes are transferred to logarithmic
space. Appropriate reductions are defined and
optimization problems are presented that are
complete for these classes. It is shown that under
the assumption $L \neq NL$ some logspace optimization
problems cannot be approximated with a constant
ratio; some can be approximated with a constant
ratio, but do not permit a logspace approximation
scheme; and some have a logspace approximation
scheme, but cannot be solved in logarithmic space. A
new natural NLcomplete problem is presented that
has a logspace approximation scheme.
 Till Tantau:
Weak Cardinality Theorems.
Journal of Symbolic Logic, 70(3):861–878, 2005.
Kummer's Cardinality Theorem states that a language A must be
recursive if a Turing machine can exclude for any n words
w_1, ..., w_n one of the n + 1 possibilities for the cardinality of
{w_1, ..., w_n} \cap A. There was good reason to believe
that this theorem is a peculiarity of recursion theory: neither the
Cardinality Theorem nor weak forms of it hold for classical resource-bounded
computational models like polynomial time. This belief may be flawed. In this
paper it is shown that Weak Cardinality Theorems hold for finite automata and
also for other models. A mathematical explanation is given as to ``why''
recursion-theoretic and automata-theoretic Weak Cardinality Theorems hold,
but not corresponding `middle-ground theorems': the recursion- and
automata-theoretic Weak Cardinality Theorems are instantiations of purely
logical Weak Cardinality Theorems. The logical theorems can be instantiated
for logical structures characterizing recursive computations and finite
automata computations. A corresponding structure characterizing polynomial
time does not exist.
 Jens Gramm, Till Nierhoff, Roded Sharan, Till Tantau:
On the Complexity of Haplotyping via Perfect Phylogeny.
In Proceedings of the Second RECOMB Satellite Workshop on Computational Methods for SNPs and Haplotypes, pp. 35–46,
2004.
The problem of haplotyping via perfect phylogeny has
received a lot of attention lately due to its
applicability to real haplotyping problems and its
theoretical elegance. However, two main research
issues remained open: the complexity of haplotyping
with missing data, and whether the problem is
linear-time solvable. In this paper we settle the
first question and make progress toward answering
the second one. Specifically, we prove that Perfect
Phylogeny Haplotyping with missing data is
NP-complete even when the phylogeny is a path and
only one allele of every polymorphic site is present
in the population in its homozygous state. Our
result implies the hardness of several variants of
the missing data problem, including the general
Perfect Phylogeny Haplotyping problem with missing
data, and Hypergraph Tree Realization with missing
data. On the positive side, we give a linear-time
algorithm for Perfect Phylogeny Haplotyping when the
phylogeny is a path. This variant is important due
to the abundance of yin-yang haplotypes in the human
genome. Our algorithm relies on a reduction of the
problem to that of deciding whether a partially
ordered set has width~2.
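The width-2 test at the end of the abstract admits a compact illustration: by Dilworth's theorem, a partial order has width at most 2 exactly when its elements can be covered by two chains, which in turn happens exactly when its incomparability graph is bipartite. The following brute-force sketch checks this property; the function name and poset encoding are illustrative only and are not the paper's linear-time algorithm.

```python
from itertools import combinations

def has_width_at_most_2(elements, leq):
    """Check whether the poset (elements, leq) has width <= 2.

    By Dilworth's theorem, width <= 2 iff the elements can be covered
    by two chains, i.e. iff the incomparability graph is bipartite.
    `leq(a, b)` is the partial order's comparison predicate.
    """
    # Build the incomparability graph: an edge joins two elements
    # that are comparable in neither direction.
    adj = {e: [] for e in elements}
    for a, b in combinations(elements, 2):
        if not leq(a, b) and not leq(b, a):
            adj[a].append(b)
            adj[b].append(a)
    # Standard 2-coloring test for bipartiteness.
    color = {}
    for start in elements:
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    # Odd cycle: no cover by two chains, so width > 2.
                    return False
    return True
```

For example, under divisibility the poset {1, 2, 4, 3, 9} splits into the chains 1 | 2 | 4 and 3 | 9, so the check succeeds, while {2, 3, 5} is an antichain of size 3 and the check fails.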
 Jens Gramm, Till Nierhoff, Till Tantau:
Perfect Path Phylogeny Haplotyping with Missing Data is FixedParameter Tractable.
In Proceedings of the 2004 International Workshop on Parameterized and Exact Computation, volume 3162 of Lecture Notes in Computer Science, pp. 174–186.
Springer,
2004.
Haplotyping via perfect phylogeny is a method for
retrieving haplotypes from genotypes. Fast
algorithms are known for computing perfect
phylogenies from complete and error-free input
instances: these instances can be organized as a
genotype matrix whose rows are the genotypes and
whose columns are the single nucleotide
polymorphisms under consideration. Unfortunately, in
the more realistic setting of missing entries in the
genotype matrix, even restricted forms of the
perfect phylogeny haplotyping problem become
NP-hard. We show that haplotyping via perfect
phylogeny with missing data becomes computationally
tractable when imposing additional biologically
motivated constraints. Firstly, we focus on asking
for perfect phylogenies that are paths, which is
motivated by the discovery that yin-yang haplotypes
span large parts of the human genome. A yin-yang
haplotype implies that every corresponding perfect
phylogeny <i>has</i> to be a path. Secondly, we
assume that the number of missing entries in every
column of the input genotype matrix is bounded. We
show that the perfect path phylogeny haplotyping
problem is fixed-parameter tractable when we
consider the maximum number of missing entries per
column of the genotype matrix as the parameter. The
restrictions we impose are met by a majority of the
problem instances encountered in publicly available
human genome data.
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
OverheadFree Computation, DCFLs, and CFLs.
Technical report URCS-TR-2004-844,
Technical Report Series of the University of Rochester, Computer Science Department,
2004.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-size memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
all context-free languages can be accepted
nondeterministically in polynomial time with
absolutely no space overhead, and that all
deterministic context-free languages can be accepted
deterministically in polynomial time with absolutely
no space overhead.
 Arfst Nickelsen, Till Tantau, Lorenz Weizsäcker:
Aggregates with Component Size One Characterize Polynomial Space.
Technical report ECCC-TR04-028,
Electronic Colloquium on Computational Complexity,
2004.
Aggregates are a computational model similar to
circuits, but the underlying graph is not
necessarily acyclic. Logspace-uniform
polynomial-size aggregates decide exactly the
languages in PSPACE; without the uniformity condition
they decide the languages in PSPACE/poly. As a
measure of similarity to Boolean circuits we
introduce the parameter component size. We prove
that aggregates of component size 1 are already
powerful enough to capture polynomial space. The
only type of cyclic component needed to make
polynomial-size circuits as powerful as
polynomial-size aggregates is the binary xor-gate
whose output is fed back to the gate as one of its
inputs.
 Mitsunori Ogihara, Till Tantau:
On the Reducibility of Sets Inside NP to Sets with Low Information Content.
Journal of Computer and System Sciences, 69(4):299–324, 2004.
This paper studies for various natural problems in
NP whether they can be reduced to sets with low
information content, such as branches, P-selective
sets, and membership comparable sets. The problems
that are studied include the satisfiability problem,
the graph automorphism problem, the undirected graph
accessibility problem, the determinant function, and
all logspace self-reducible languages. Some of these
are complete for complexity classes within NP, but
for others an exact complexity-theoretic
characterization is not known. Reducibility of these
problems is studied in a general framework
introduced in this paper: prover-verifier protocols
with low-complexity provers. It is shown that all
these natural problems indeed have such
protocols. This fact is used to show, for certain
reduction types, that these problems are not
reducible to sets with low information content
unless their complexity is much less than what it is
currently believed to be. The general framework is
also used to obtain a new characterization of the
complexity class L: L is the class of all logspace
self-reducible sets in L^{L-sel}.
 Till Tantau:
Comparing Verboseness for Finite Automata and Turing Machines.
Theory of Computing Systems, 37(1):95–109, 2004.
A language is called (m,n)-verbose if
there exists a Turing machine that enumerates for
any n words at most m possibilities
for their characteristic string. We compare this
notion to (m,n)-fa-verboseness, where
instead of a Turing machine a finite automaton is
used. Using a new structural diagonalisation
method, where finite automata trick Turing machines,
we prove that all (m,n)-verbose
languages are (h,k)-verbose if and only if all
(m,n)-fa-verbose languages are
(h,k)-fa-verbose. In other words,
Turing machines and finite automata behave in
exactly the same way with respect to the inclusion of
verboseness classes. This identical behaviour
implies that the Nonspeedup Theorem also holds for
finite automata. As an application of the
theoretical framework, we prove a lower bound on the
number of bits that need to be communicated to finite
automata protocol checkers for nonregular
protocols.
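The definition of verboseness can be made concrete with a toy sketch (all names hypothetical, not from the paper): an (m,n)-enumerator receives n words and outputs at most m candidate characteristic strings, one of which must be the true one. Every language is trivially (2^n, n)-verbose, and every decidable language is (1, n)-verbose:

```python
def trivial_enumerator(words):
    """Every language is (2^n, n)-verbose: output all 2^n candidates."""
    n = len(words)
    return {format(i, f"0{n}b") for i in range(2 ** n)}

def decider_enumerator(words, decide):
    """Every decidable language is (1, n)-verbose: a decider pins down
    the characteristic string exactly, so a single candidate suffices."""
    return {"".join("1" if decide(w) else "0" for w in words)}

def is_valid_output(words, candidates, m, decide):
    """Check the (m, n)-verboseness condition for one input tuple:
    at most m candidates, and the true characteristic string is among them."""
    truth = "".join("1" if decide(w) else "0" for w in words)
    return len(candidates) <= m and truth in candidates
```

For the language of even-length words and the tuple ("a", "bb", "ccc"), the true characteristic string is "010"; the trivial enumerator outputs all eight candidates, the decider-based one outputs exactly that single string.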
 Till Tantau:
Strahlende Präsentationen in LaTeX.
Die TeXnische Komödie, 2/04:54–80, 2004.
In German.
The LaTeX class beamer is used to create
presentations with LaTeX, pdfLaTeX, or LyX. As the
name of the class suggests, beamer aims in
particular to make it easy to create talks presented
with a video projector; classical slide talks are
supported just as well. As far as possible, beamer
builds on LaTeX commands familiar to the user, such
as \section for creating sections or enumerate for
creating itemized lists. Beamer talks often profit
from the use of overlays, and their creation is made
easy by a simple but powerful syntax extension. The
class follows the LaTeX philosophy of separating
form and content: from a single source, differently
styled talks can be produced by varying layout
parameters. Predefined layouts range from classic
and plain to fashionable and flashy, but always
follow the Bauhaus principle "form follows
function".
 Till Tantau:
Über strukturelle Gemeinsamkeiten der Aufzählbarkeitsklassen von Turingmaschinen und endlichen Automaten.
In Ausgezeichnete Informatikdissertationen 2003, Lecture Notes in Informatics, pp. 189–198.
Springer,
2004.
This contribution contains a summary of the
dissertation On Structural Similarities of Finite
Automata and Turing Machine Enumerability
Classes.
 Till Tantau:
A Logspace Approximation Scheme for the Shortest Path Problem for Graphs with Bounded Independence Number.
In Proceedings of the 21st International Symposium on Theoretical Aspects of Computer Science (STACS 2004), volume 2996 of Lecture Notes in Computer Science, pp. 326–337.
Springer,
2004.
How difficult is it to find a path between two
vertices in finite directed graphs whose
independence number is bounded by some constant
k? The independence number of a graph is the
largest number of vertices that can be picked such
that there is no edge between any two of them. The
complexity of this problem depends on the exact
question we ask: Do we only wish to tell whether a
path exists? Do we also wish to construct such a
path? Are we required to construct the shortest
path? Concerning the first question, it is known
that the reachability problem is first-order
definable for all k. In contrast, the
corresponding reachability problems for many other
types of finite graphs, including dags and trees,
are not first-order definable. Concerning the second
question, in this paper it is shown that we can not
only construct paths in logarithmic space, but that
there even exists a logspace approximation scheme
for constructing them. In contrast, for directed
graphs, undirected graphs, and dags we cannot
construct paths in logarithmic space (let alone
approximate the shortest one), unless complexity
class collapses occur. Concerning the third
question, it is shown that even telling whether the
shortest path has a certain length is NL-complete
and thus as difficult as for arbitrary directed
graphs.
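For intuition, the parameter itself is easy to compute by brute force on small graphs. The following sketch (illustrative only, and unrelated to the paper's logspace techniques) computes the independence number exactly as defined in the abstract:

```python
from itertools import combinations

def independence_number(vertices, edges):
    """Largest number of vertices with no edge between any two of them.

    Brute force over all vertex subsets, so only suitable for tiny
    graphs. `edges` is a set of frozensets {u, v}; for a directed
    graph, include {u, v} whenever an edge runs in either direction.
    """
    # Try subset sizes from largest to smallest; the first size that
    # admits an edge-free subset is the independence number.
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            if all(frozenset((u, v)) not in edges
                   for u, v in combinations(subset, 2)):
                return r
    return 0
```

On the path 1–2–3 the answer is 2 (pick the endpoints), while on a triangle, such as any 3-vertex tournament viewed undirectedly, it is 1.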
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Computation with Absolutely No Space Overhead.
In Proceedings of the Seventh International Conference on Developments in Language Theory (DLT 2003), volume 2710 of Lecture Notes in Computer Science, pp. 325–336.
Springer,
2003.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-size memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
subclasses of the context-free languages can even be
accepted in polynomial time with absolutely no space
overhead.
 Arfst Nickelsen, Till Tantau:
Partial Information Classes.
SIGACT News, 34(1):32–46, 2003.
In this survey we present partial information
classes, which have been studied under different
names and in different contexts in the
literature. They are defined in terms of partial
information algorithms. Such algorithms take a word
tuple as input and yield a small set of
possibilities for its characteristic string as
output. We define a unified framework for the study
of partial information classes and show how previous
notions fit into the framework. The framework allows
us to study the relationship of a large variety of
partial information classes in a uniform way. We
survey how partial information classes are related
to other complexity-theoretic notions like advice
classes, lowness, bi-immunity, NP-completeness, and
decidability.
 Till Tantau:
Logspace Optimisation Problems and Their Approximation Properties.
Technical report ECCC-TR03-077,
Electronic Colloquium on Computational Complexity,
2003.
This paper introduces logspace optimisation problems
as an analogue of the widely studied polynomial-time
optimisation problems. Similarly to the
polynomial-time setting, logspace optimisation
problems can have vastly different approximation
properties, even though the underlying decision
problems have the same computational complexity. In
order to study the approximability of logspace
optimisation problems in a systematic way,
polynomial-time approximation classes are
transferred to logarithmic space.
reductions are defined and optimisation problems are
presented that are complete for these classes. It is
shown that under the assumption L != NL some natural
logspace optimisation problems cannot be
approximated with a constant ratio; some can be
approximated with a constant ratio, but do not
permit a logspace approximation scheme; and some
have a logspace approximation scheme, but cannot be
solved in logarithmic space. An example of a problem
of the latter type is the problem of finding the
shortest path between two vertices of a tournament.
 Till Tantau:
On Structural Similarities of Finite Automata and Turing Machine Enumerability Classes.
Wissenschaft und Technik Verlag,
2003.
Reviewers: Dirk Siefkes, Johannes Köbler.
Dissertation for the degree of Dr. rer. nat., Technische Universität Berlin
The complexity of functions mapping words to words
can be measured in various ways. Well-known measures
are time and space complexity. <i>Enumerability</i>
is a further complexity measure. It is used in
recursion theory, where it plays a central role in
the theory of bounded-query reductions, and in
resource-bounded complexity theory, in particular in
connection with nonuniform computations. In this
thesis the concept of enumerability is transferred
to finite automata. It is shown that enumerability
behaves in the same way in recursion theory and in
automata theory, but differently in complexity
theory.

The enumerability of a function <i>f</i> is the
smallest number <i>m</i> for which an
<i>m</i>-enumerator for <i>f</i> exists. An
<i>m</i>-<i>enumerator</i> is a machine that, on
input of a word <i>w</i>, outputs a set of at most
<i>m</i> possibilities for
<i>f</i>(<i>w</i>). Different classes can be defined
by varying the parameter <i>m</i> and the type of
machine allowed. In recursion theory arbitrary
Turing machines are allowed as enumerators, in
automata theory only finite automata. A deep
structural result that holds for Turing machines as
well as for finite automata is the following
<i>cross product theorem</i>: If <i>f</i> ×
<i>g</i> is an (<i>n</i> + <i>m</i>)-enumerable
function, then either <i>f</i> is an
<i>n</i>-enumerable or <i>g</i> an
<i>m</i>-enumerable function. This theorem does not
hold in the polynomial-time case.

Enumerability can also be used to quantify the
complexity of <i>languages</i>. To this end one asks
how difficult it is to enumerate the <i>n</i>-fold
characteristic function
\chi<i><sub>A</sub><sup>n</sup></i> and the
<i>n</i>-fold cardinality function
#<i><sub>A</sub><sup>n</sup></i> of a language
<i>A</i>. A language <i>A</i> is (<i>m</i>,
<i>n</i>)-<i>verbose</i> if
\chi<i><sub>A</sub><sup>n</sup></i> is
<i>m</i>-enumerable. The inclusion structures of the
verboseness classes for Turing machines and for
finite automata are the same: all (<i>m</i>,
<i>n</i>)-Turing-verbose languages are (<i>h</i>,
<i>k</i>)-Turing-verbose if and only if all
languages that are (<i>m</i>, <i>n</i>)-verbose with
respect to finite automata are also (<i>h</i>,
<i>k</i>)-verbose. This does not hold in the
polynomial-time case.

The enumerability of #<i><sub>A</sub><sup>n</sup></i>
is well studied in recursion theory. Kummer's
Cardinality Theorem states that <i>A</i> is
recursive if #<i><sub>A</sub><sup>n</sup></i> is
<i>n</i>-enumerable by a Turing
machine. Presumably this theorem also holds for
finite automata: at least the Nonspeedup Theorem,
the Cardinality Theorem for two words, and the
restricted Cardinality Theorem hold for finite
automata. The Cardinality Theorem does not hold in
the polynomial-time case.

The main proofs of this thesis use two techniques
whose application also appears promising in other
areas: generic proofs and branch
diagonalisation. Generic proofs use elementary
definitions, a concept from logic, to define
enumerators. Such proofs can be applied to all
models of computation that are closed under
elementary definitions. This is the case for finite
automata, but also for Presburger arithmetic and
ordinal number arithmetic. The second technique is a
new diagonalisation method in which machines are
tricked on the code of the previous sequence of
diagonalisation decisions rather than on their own
code. Branch diagonalisation is not universally
applicable, but where it can be applied it allows
one to diagonalise against Turing machines using
finite automata.

The results on enumerability classes also have
applications in other areas, such as protocol
testing by finite automata, classification problems
with examples, and separability questions. A nice
example of such an application reads as follows: If
there exist regular supersets of <i>A</i> ×
<i>A</i>, <i>A</i> × \bar <i>A</i>, and \bar
<i>A</i> × \bar <i>A</i> whose intersection is
empty, then <i>A</i> is regular.
 Till Tantau:
Query Complexity of Membership Comparable Sets.
Theoretical Computer Science, 302(1–3):467–474, 2003.
This paper investigates how many queries to
k-membership comparable sets are needed in
order to decide all (k+1)-membership
comparable sets. For k >= 2 this query
complexity is at least linear and at most cubic. As
a corollary we obtain that more languages are
O(log n)-membership comparable than
truth-table reducible to P-selective sets.
 Till Tantau:
Weak Cardinality Theorems for FirstOrder Logic.
Technical report ECCC-TR03-024,
Electronic Colloquium on Computational Complexity,
2003.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n + 1 possibilities
for the number of words in the language. It is known
that this theorem does not hold for polynomial-time
computations, but there is evidence that it holds
for finite automata: at least weak cardinality
theorems hold for finite automata. This paper shows
that some of the recursion-theoretic and
automata-theoretic weak cardinality theorems are
instantiations of purely logical theorems. Apart
from unifying previous results in a single
framework, the logical approach allows us to prove
new theorems for other computational models. For
example, weak cardinality theorems hold for
Presburger arithmetic.
 Till Tantau:
Weak Cardinality Theorems for FirstOrder Logic.
In Proceedings of the 14th International Symposium on Fundamentals of Computation Theory (FCT 2003), volume 2751 of Lecture Notes in Computer Science, pp. 400–411.
Springer,
2003.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n + 1 possibilities
for the number of words in the language. It is known
that this theorem does not hold for polynomial-time
computations, but there is evidence that it holds
for finite automata: at least weak cardinality
theorems hold for finite automata. This paper shows
that some of the recursion-theoretic and
automata-theoretic weak cardinality theorems are
instantiations of purely logical theorems. Apart
from unifying previous results in a single
framework, the logical approach allows us to prove
new theorems for other computational models. For
example, weak cardinality theorems hold for
Presburger arithmetic.
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Computation with Absolutely No Overhead.
Technical report URCS-TR-2002-779,
Technical Report Series of the University of Rochester, Computer Science Department,
2002.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-size memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
large subclasses of the context-free languages can
even be accepted in polynomial time with absolutely
no space overhead.
 Arfst Nickelsen, Till Tantau:
On Reachability in Graphs with Bounded Independence Number.
In Proceedings of the Eighth Annual International Computing and Combinatorics Conference (COCOON 2002), volume 2387 of Lecture Notes in Computer Science, pp. 554–563.
Springer,
2002.
We study the reachability problem for finite
directed graphs whose independence number is bounded
by some constant k. We show that this problem
is firstorder definable for all k. In
contrast, the reachability problem for many other
types of finite graphs, including dags and trees, is
not first-order definable. We also study the
succinct version of the problem, for succinctly
represented graphs with independence number at most
k, and show that it is \Pi^{P}_{2}-complete
for all k.
 Mitsunori Ogihara, Till Tantau:
On the Reducibility of Sets Inside NP to Sets with Low Information Content.
Technical report URCS-TR-2002-782,
Technical Report Series of the University of Rochester, Computer Science Department,
2002.
We study whether sets inside NP can be reduced to sets with low information content but possibly still high computational complexity. Examples of sets with low information content are tally sets, sparse sets, P-selective sets, and membership comparable sets. For the graph automorphism and isomorphism problems GA and GI, for the directed graph reachability problem GAP, for the determinant function det, and for logspace self-reducible languages we establish the following results:

- If GA is \le^{p}_{tt}-reducible to a P-selective set, then GA \in P.
- If GI is O(log)-membership comparable, then GI \in RP.
- If GAP is logspace O(1)-membership comparable, then GAP \in L.
- If det is \le^{log}_{T}-reducible to an L-selective set, then det \in FL.
- If A is logspace self-reducible and \le^{log}_{T}-reducible to an L-selective set, then A \in L.

The last result is a strong logspace version of the characterisation of P as the class of self-reducible P-selective languages. As P and NL have logspace self-reducible complete sets, it also establishes a logspace analogue of the conjecture that if SAT is \le^{p}_{T}-reducible to a P-selective set, then SAT \in P.
 Till Tantau:
A Note on the Power of Extra Queries to Membership Comparable Sets.
Technical report ECCC-TR02-044,
Report Series of the Institutes of Computer Science/Mathematics, Universität zu Lübeck,
2002.
A language is called k-membership comparable
if there exists a polynomial-time algorithm that
excludes for any k words one of the
2^{k} possibilities for their
characteristic string. It is known that all
membership comparable languages can be reduced to
some P-selective language with polynomially many
adaptive queries. We show, however, that for all
k there exists a (k+1)-membership
comparable set that is neither truth-table reducible
nor sublinear Turing reducible to any
k-membership comparable set. In particular,
for all k > 2 the number of adaptive queries
to P-selective sets necessary to decide all
k-membership comparable sets is
Omega(n) and O(n^{3}). As this
shows that the truth-table closure of P-sel is a
proper subset of P-mc(log), we get a proof of
Sivakumar's conjecture that O(log)-membership
comparability is a more general notion than
truth-table reducibility to P-sel.
 Till Tantau:
Comparing Verboseness for Finite Automata and Turing Machines.
In Proceedings of the 19th International Symposium on Theoretical Aspects of Computer Science (STACS 2002), volume 2285 of Lecture Notes in Computer Science, pp. 465–476.
Springer,
2002.
A language is called (m,n)-verbose if
there exists a Turing machine that enumerates for
any n words at most m possibilities
for their characteristic string. We compare this
notion to (m,n)-fa-verboseness, where
instead of a Turing machine a finite automaton is
used. Using a new structural diagonalisation
method, where finite automata trick Turing machines,
we prove that all (m,n)-verbose
languages are (h,k)-verbose if and only if all
(m,n)-fa-verbose languages are
(h,k)-fa-verbose. In other words,
Turing machines and finite automata behave in
exactly the same way with respect to the inclusion of
verboseness classes. This identical behaviour
implies that the Nonspeedup Theorem also holds for
finite automata. As an application of the
theoretical framework, we prove a lower bound on the
number of bits that need to be communicated to finite
automata protocol checkers for nonregular
protocols.
 Till Tantau:
Towards a Cardinality Theorem for Finite Automata.
In Proceedings of the 27th International Symposium on Mathematical Foundations of Computer Science (MFCS 2002), volume 2420 of Lecture Notes in Computer Science, pp. 625–636.
Springer,
2002.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n+1 possibilities
for the number of words in the language. This paper
gathers evidence that the cardinality theorem might
also hold for finite automata. Three reasons are
given. First, Beigel's nonspeedup theorem also holds
for finite automata. Second, the cardinality theorem
for finite automata holds for n=2. Third, the
restricted cardinality theorem for finite automata
holds for all n.
 Arfst Nickelsen, Till Tantau:
Closure of Polynomial Time Partial Information Classes under Polynomial Time Reductions.
In Proceedings of the 13th International Symposium on Fundamentals of Computation Theory (FCT 2001), volume 2138 of Lecture Notes in Computer Science, pp. 299–310.
Springer,
2001.
Polynomial-time partial information classes are
extensions of the class P of languages decidable in
polynomial time. A partial information algorithm for
a language A computes, for fixed n, on
input of words x_{1}, ..., x_{n} a set
P of bitstrings, called a pool, such that
\chi_{A}(x_{1}, ..., x_{n}) \in P, where
P is chosen from a family D of pools. A
language A is in P[D] if there is a
polynomial-time partial information algorithm which
for all inputs (x_{1}, ..., x_{n}) outputs
a pool P in D with
\chi_{A}(x_{1}, ..., x_{n}) \in P. Many
extensions of P studied in the literature, including
approximable languages, cheatability, p-selectivity,
and frequency computations, form a class P[D]
for an appropriate family D. We characterise
those families D for which P[D] is
closed under certain polynomial-time reductions,
namely bounded truth-table, truth-table, and Turing
reductions. We also treat positive reductions. A
class P[D] is presented which strictly
contains the class P-sel of p-selective languages
and is closed under positive truth-table
reductions.
 Till Tantau:
A Note on the Complexity of the Reachability Problem for Tournaments.
Technical report ECCC-TR01-092,
Electronic Colloquium on Computational Complexity,
2001.
Deciding whether a vertex in a graph is reachable
from another vertex has been studied intensively in
complexity theory and is well understood. For common
types of graphs like directed graphs, undirected
graphs, dags or trees it takes a (possibly
nondeterministic) logspace machine to decide the
reachability problem, and the succinct versions of
these problems (which often arise in hardware
design) are all PSPACEcomplete. In this paper we
study tournaments, which are directed graphs with
exactly one edge between any two vertices. We show
that the tournament reachability problem is
first-order definable and that its succinct version
is \Pi^{P}_{2}-complete.
 Till Tantau:
Schnelle Löser für periodische Integralgleichungen mittels Matrixkompression und Mehrgitteriteration.
Technische Universität Berlin,
2001.
Diploma thesis in mathematics
This thesis investigates fast solvers for a class of
periodic integral equations. Such integral equations
arise in a number of problem settings, for example
when solving Dirichlet problems or the biharmonic
equation. The integrals occurring in the considered
integral equations are all of the same form: they
are convolution integrals in which a smooth
perturbation term is taken into account. If suitable
conditions are imposed on the Fourier coefficients
of the convolution kernels, the equations under
consideration can be solved uniquely. For this case,
methods for computing the solution quickly are
investigated. Here "quickly" means that the
computational cost is proportional to the cost of a
fast Fourier transform. Using a parallel
architecture, even logarithmic running time can be
achieved with the same total amount of work. The
solution is computed in three steps. In the first
step, the input functions are discretised. Second, a
spectral Galerkin method is applied to the
discretised equation. This produces large system
matrices, which can, however, be compressed. In the
third step, the matrix compression makes it possible
to approximate the sought solution asymptotically
optimally by a multigrid iteration.