 Till Tantau:
On the Power of Extra Queries to Selective Languages.
Technical report ECCC-TR00-077,
Electronic Colloquium on Computational Complexity,
2000.
A language is
selective if there exists a
selection algorithm for it. Such an algorithm
selects from any two words one that is an element
of the language whenever at least one of them
is. Restricting the complexity of selection
algorithms yields different
selectivity
classes like the P-selective or the
semirecursive (i.e. recursively selective)
languages. A language is
supportive if
k queries to the language are more powerful
than
k-1 queries for every
k. Recently, Beigel et al. (J. of
Symb. Logic, 65(1):1–18, 2000) proved a powerful
recursion theoretic theorem: A semirecursive
language is supportive iff it is nonrecursive. For
restricted computational models like polynomial time
this theorem does not hold in this form. Our main
result states that for any reasonable computational
model
a selective language is supportive iff it
is not cheatable. Beigel et al.'s result is a
corollary of this general theorem since `recursively
cheatable' languages are recursive by Beigel's
Nonspeedup Theorem. Our proof is based on a partial
information analysis (see Nickelsen, STACS 97, LNCS
1200, pp. 307–318) of the involved languages: We
establish matching upper and lower bounds for the
partial information complexity of the equivalence
and reduction closures of selective languages. From
this we derive the main results as these bounds
differ for different
k.
We give four
applications of our main theorem and the proof
technique. Firstly, the relation
E^{P}_{k-tt}(P-sel)
\not\subseteq
R^{P}_{(k-1)-tt}(P-sel)
proven by Hemaspaandra et al. (Theor. Comput. Sci.,
155(2):447–457, 1996) still holds if we
relativise only the right hand side. Secondly,
we settle an open problem from the same paper:
Equivalence to a P-selective language with k
serial queries cannot generally be replaced
by a reduction using less than
2^{k}-1 parallel
queries. Thirdly, the k-truth-table reduction
closures of selectivity classes are
(m,n)-verbose iff every walk on the
n-dimensional hypercube with transition
counts at most k visits at most m
bitstrings. Lastly, these reduction closures are
(m,n)-recursive iff every such walk is
contained in a closed ball of radius
n-m.
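For concreteness, the selectivity notion above can be illustrated with a standard left cut: for a fixed real r, the language L(r) of binary words whose dyadic value is at most r has "take the smaller word" as a selector. A toy sketch (the language L(r) and the dyadic encoding are my own illustrative choices, not taken from the report):

```python
def value(w):
    """Dyadic value of a binary string, e.g. value('101') = 0.625."""
    return sum(int(b) / 2 ** (i + 1) for i, b in enumerate(w))

def selector(u, v):
    """Selection algorithm for L(r): pick the word with the smaller value."""
    return u if value(u) <= value(v) else v

def in_left_cut(w, r):
    """Membership in the standard left cut L(r) = {w : value(w) <= r}."""
    return value(w) <= r

# The selectivity property: whenever at least one of u, v is in L(r),
# the selected word is in L(r) as well.
r = 0.4
words = [format(i, "b") for i in range(1, 16)]
for u in words:
    for v in words:
        if in_left_cut(u, r) or in_left_cut(v, r):
            assert in_left_cut(selector(u, v), r)
```

The selector here is trivially computable, which is exactly why restricting its complexity (polynomial time, recursiveness, ...) is what gives the different selectivity classes their content.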
 Till Tantau:
A Note on the Complexity of the Reachability Problem for Tournaments.
Technical report ECCC-TR01-092,
Electronic Colloquium on Computational Complexity,
2001.
Deciding whether a vertex in a graph is reachable
from another vertex has been studied intensively in
complexity theory and is well understood. For common
types of graphs like directed graphs, undirected
graphs, dags or trees it takes a (possibly
nondeterministic) logspace machine to decide the
reachability problem, and the succinct versions of
these problems (which often arise in hardware
design) are all PSPACE-complete. In this paper we
study tournaments, which are directed graphs with
exactly one edge between any two vertices. We show
that the tournament reachability problem is
first-order definable and that its succinct version is
\Pi^{P}_{2}-complete.
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Computation with Absolutely No Overhead.
Technical report URCS-TR-2002-779,
Technical report series of the University of Rochester, Computer Science Department,
2002.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-sized memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that a
large subclass of the context-free languages can
even be accepted in polynomial time with absolutely
no space overhead.
 Mitsunori Ogihara, Till Tantau:
On the Reducibility of Sets Inside NP to Sets with Low Information Content.
Technical report URCS-TR-2002-782,
Technical report series of the University of Rochester, Computer Science Department,
2002.
We study whether sets inside NP can be reduced to sets with low information content but possibly still high computational complexity. Examples of sets with low information content are tally sets, sparse sets, P-selective sets, and membership-comparable sets. For the graph automorphism and isomorphism problems GA and GI, for the directed graph reachability problem GAP, for the determinant function det, and for logspace self-reducible languages we establish the following results:

If GA is \le^{p}_{tt}-reducible to a
P-selective set, then GA \in P.

If GI is O(log)-membership comparable, then GI \in RP.

If GAP is logspace O(1)-membership comparable, then GAP \in L.

If det is \le^{log}_{T}-reducible to an L-selective set, then det \in FL.

If A is logspace self-reducible and
\le^{log}_{T}-reducible to an
L-selective set, then A \in L.
The last result is a strong logspace version of the characterisation of P as the class of self-reducible P-selective languages. As P and NL have logspace self-reducible complete sets, it also establishes a logspace analogue of the conjecture that if SAT is \le^{p}_{T}-reducible to a P-selective set, then SAT \in P.
 Till Tantau:
A Note on the Power of Extra Queries to Membership Comparable Sets.
Technical report ECCC-TR02-044,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2002.
A language is called k-membership comparable
if there exists a polynomialtime algorithm that
excludes for any k words one of the
2^{k} possibilities for their
characteristic string. It is known that all
membership comparable languages can be reduced to
some P-selective language with polynomially many
adaptive queries. We show however that for all
k there exists a (k+1)-membership
comparable set that is neither truth-table reducible
nor sublinear Turing reducible to any
k-membership comparable set. In particular,
for all k > 2 the number of adaptive queries
to P-selective sets necessary to decide all
k-membership comparable sets is
Omega(n) and O(n^{3}). As this
shows that the truth-table closure of P-sel is a
proper subset of P-mc(log), we get a proof of
Sivakumar's conjecture that O(log)-membership
comparability is a more general notion than
truth-table reducibility to P-sel.
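To see why selectivity constrains characteristic strings, consider the simplified case of a selector that induces a linear order (an illustrative assumption; in general a P-selective set only yields a tournament of selector outcomes): listing k words in selector order, their characteristic string must be a chain pattern 1^i 0^(k-i), so only k+1 of the 2^k possibilities can occur and an algorithm may exclude, for instance, the string beginning 01:

```python
def possible_characteristic_strings(k):
    """The k+1 chain patterns 1^i 0^(k-i) that a linearly ordered
    selector allows for k words listed in selector order."""
    return {"1" * i + "0" * (k - i) for i in range(k + 1)}

def excluded_example(k):
    """One of the 2^k strings that provably cannot occur (for k >= 2)."""
    return "01" + "0" * (k - 2)

k = 3
allowed = possible_characteristic_strings(k)
assert len(allowed) == k + 1          # far fewer than 2**k == 8
assert excluded_example(k) not in allowed
```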
 Till Tantau:
Logspace Optimisation Problems and Their Approximation Properties.
Technical report ECCC-TR03-077,
Electronic Colloquium on Computational Complexity,
2003.
This paper introduces logspace optimisation problems
as an analogue of the widely studied polynomial-time
optimisation problems. Similarly to the
polynomial-time setting, logspace optimisation
problems can have vastly different approximation
properties, even though the underlying decision
problems have the same computational complexity. In
order to study the approximability of logspace
optimisation problems in a systematic way,
polynomial-time approximation classes are
transferred to logarithmic space. Appropriate
reductions are defined and optimisation problems are
presented that are complete for these classes. It is
shown that under the assumption L != NL some natural
logspace optimisation problems cannot be
approximated with a constant ratio; some can be
approximated with a constant ratio, but do not
permit a logspace approximation scheme; and some
have a logspace approximation scheme, but cannot be
solved in logarithmic space. An example of a problem
of the latter type is the problem of finding the
shortest path between two vertices of a tournament.
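The last-mentioned optimisation problem is, of course, solvable exactly in polynomial time; the point of the abstract is what remains possible in logarithmic space. As a plain polynomial-time baseline (the example tournament is my own):

```python
from collections import deque

def shortest_path(adj, s, t):
    """Breadth-first search for a shortest s-t path in a directed graph,
    given as a dict mapping each vertex to the set of its out-neighbours."""
    prev = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None  # t is not reachable from s

# A tournament on {0, 1, 2, 3}: exactly one arc between any two vertices.
tournament = {0: {1}, 1: {2}, 2: {0, 3}, 3: {0, 1}}
print(shortest_path(tournament, 0, 3))  # [0, 1, 2, 3]
```

BFS uses linear working space; squeezing such computations into logarithmic space, or approximating them there, is exactly what the paper studies.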
 Till Tantau:
Weak Cardinality Theorems for FirstOrder Logic.
Technical report ECCC-TR03-024,
Electronic Colloquium on Computational Complexity,
2003.
Kummer's cardinality theorem states that a language
is recursive if a Turing machine can exclude for any
n words one of the n + 1 possibilities
for the number of words in the language. It is known
that this theorem does not hold for polynomial-time
computations, but there is evidence that it holds
for finite automata: at least weak cardinality
theorems hold for finite automata. This paper shows
that some of the recursion-theoretic and
automata-theoretic weak cardinality theorems are
instantiations of purely logical theorems. Apart
from unifying previous results in a single
framework, the logical approach allows us to prove
new theorems for other computational models. For
example, weak cardinality theorems hold for
Presburger arithmetic.
 Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Overhead-Free Computation, DCFLs, and CFLs.
Technical report URCS-TR-2004-844,
Technical report series of the University of Rochester, Computer Science Department,
2004.
We study Turing machines that are allowed absolutely
no space overhead. The only work space the machines
have, beyond the fixed amount of memory implicit in
their finite-state control, is that which they can
create by cannibalizing the input bits' own
space. This model more closely reflects the
fixed-sized memory of real computers than does the
standard complexity-theoretic model of linear
space. Though some context-sensitive languages
cannot be accepted by such machines, we show that
all context-free languages can be accepted
nondeterministically in polynomial time with
absolutely no space overhead, and that all
deterministic context-free languages can be accepted
deterministically in polynomial time with absolutely
no space overhead.
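As a loose illustration of the model's spirit (emphatically not the construction from the report), the Dyck language of balanced parentheses, a deterministic context-free language, can be decided by overwriting the input buffer itself; apart from the buffer, only a constant number of index variables is used:

```python
def balanced_in_place(buf):
    """Decide balanced parentheses by cannibalizing the input: each matched
    '(' is overwritten with '-', so no separate stack is ever allocated."""
    top = -1                          # position of the last unmatched '('
    for i, c in enumerate(buf):
        if c == "(":
            top = i
        elif c == ")":
            if top < 0:               # closing parenthesis without partner
                return False
            buf[top] = "-"            # cancel the match inside the input
            while top >= 0 and buf[top] != "(":
                top -= 1              # walk left to the previous unmatched '('
    return top < 0                    # accept iff no '(' remains unmatched

assert balanced_in_place(list("(()())"))
assert not balanced_in_place(list("(()"))
assert not balanced_in_place(list("())"))
```

The leftward walks make this quadratic rather than linear, which matches the abstract's theme: the price for zero space overhead is paid in (still polynomial) time.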
 Arfst Nickelsen, Till Tantau, Lorenz Weizsäcker:
Aggregates with Component Size One Characterize Polynomial Space.
Technical report ECCC-TR04-028,
Electronic Colloquium on Computational Complexity,
2004.
Aggregates are a computational model similar to
circuits, but the underlying graph is not
necessarily acyclic. Logspace-uniform
polynomial-size aggregates decide exactly the
languages in PSPACE; without uniformity condition
they decide the languages in PSPACE/poly. As a
measure of similarity to boolean circuits we
introduce the parameter component size. We prove
that already aggregates of component size 1 are
powerful enough to capture polynomial space. The
only type of cyclic components needed to make
polynomial-size circuits as powerful as
polynomial-size aggregates are binary xor-gates
whose output is fed back to the gate as one of the
inputs.
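The cyclic component named in the last sentence is easy to simulate: a binary xor-gate whose output is fed back as one of its inputs maintains the running parity of its input stream, i.e., it behaves as a one-bit memory cell. A minimal sketch:

```python
class FeedbackXor:
    """A binary xor-gate whose output is fed back as one of its inputs."""

    def __init__(self):
        self.state = 0              # the fed-back wire, initially 0

    def tick(self, x):
        self.state ^= x             # output = old state XOR input, fed back
        return self.state

gate = FeedbackXor()
outputs = [gate.tick(x) for x in [1, 0, 1, 1]]
print(outputs)  # [1, 1, 0, 1] -- the running parity of the input stream
```

It is this ability of a single cyclic gate to retain state across time that, per the abstract, lifts polynomial-size circuits to the power of polynomial-size aggregates.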
 Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
Technical report URCS-TR-905,
Technical report series of the University of Rochester, Computer Science Department,
2006.
A king in a directed graph is a node from which each
node in the graph can be reached via paths of length
at most two. There is a broad literature on
tournaments (completely oriented digraphs), and it
has been known for more than half a century that all
tournaments have at least one king
[Lan53]. Recently, kings have proven useful in
theoretical computer science, in particular in the
study of the complexity of reachability problems
[Tan01,NT05] and semifeasible sets
[HNP98,HT06,HOZZ06].
In this paper, we study the complexity of
recognizing kings. For each succinctly specified
family of tournaments, the king
problem is already known to belong to $\Pi_2^p$
[HOZZ06]. We prove that the complexity of kingship
problems is a rich enough vocabulary to pinpoint
every nontrivial manyone degree in $\Pi_2^p$. That
is, we show that \emph{every} set in $\Pi_2^p$ other
than $\emptyset$ and $\Sigma^*$ is equivalent to a
king problem under $\leq_m^p$-reductions. Indeed, we
show that the equivalence can even be instantiated
via relatively simple padding, and holds even if the
notion of kings is redefined to refer to k-kings
(for any fixed $k \geq 2$), that is, nodes from which
all nodes can be reached via paths of length at most
k.
Using these and related techniques, we obtain a
broad range of additional results about the complexity of king
problems, diameter problems, radius problems, and
initial component problems. It follows easily from
our proof approach that the problem of testing
kingship in succinctly specified graphs (which need
not be tournaments) is $\Pi_2^p$-complete. We show
that the radius problem for arbitrary succinctly
represented graphs is $\Sigma_3^p$-complete, but
that in contrast the diameter problem for arbitrary
succinctly represented graphs (or even tournaments)
is $\Pi_2^p$-complete. Yet again in contrast, we
prove that initial component languages (which ask
whether a given node can reach all other nodes in
its tournament) all fall within $\Pi_2^p$, yet
cannot be $\Pi_2^p$-complete, or even
NP-hard, unless P=NP.
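Landau's classical argument behind [Lan53] is that a vertex of maximum out-degree is always a king. A small checker on a hand-picked tournament (illustrative only):

```python
def is_king(adj, u):
    """u is a king if every vertex is reachable from u in at most 2 steps."""
    reach = {u} | adj[u] | {w for v in adj[u] for w in adj[v]}
    return reach == set(adj)

def max_outdegree_vertex(adj):
    return max(adj, key=lambda u: len(adj[u]))

# A tournament on {0, 1, 2, 3}: exactly one arc between any two vertices.
tournament = {0: {1, 2}, 1: {2, 3}, 2: {3}, 3: {0}}
king = max_outdegree_vertex(tournament)
assert is_king(tournament, king)      # Landau: max out-degree implies king
assert not is_king(tournament, 2)     # vertex 2 cannot reach 1 in 2 steps
```

The hardness results of the paper concern succinctly specified tournaments, where the adjacency relation is given by a circuit and such explicit enumeration is no longer feasible.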
 Till Tantau:
The Descriptive Complexity of the Reachability Problem As a Function of Different Graph Parameters.
Technical report ECCC-TR06-035,
Electronic Colloquium on Computational Complexity,
2006.
The reachability problem for graphs cannot be
described, in the sense of descriptive complexity
theory, using a single first-order formula. This is
true both for directed and undirected graphs, both
in the finite and in the infinite case. However, if we restrict
ourselves to graphs in which a certain graph
parameter is fixed to a certain value, first-order
formulas often suffice. A trivial example is the class of graphs
whose number of vertices is fixed to n. In
such graphs reachability can be described using a
first-order formula with a quantifier nesting depth
of log_{2} n, which is both a lower
and an upper bound. In this paper we investigate how
the descriptive complexity of the reachability
problem varies as a function of graph parameters
such as the size of the graph, the clique number,
the matching number, the independence number or the
domination number. The independence number turns out
to be by far the most challenging graph parameter.
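The quantifier nesting depth of log_2 n mentioned above comes from the standard distance-doubling trick; as a sketch (notation mine, not the report's):

```latex
% \varphi_k(x,y) expresses "there is a path of length at most 2^k from x to y":
\varphi_0(x,y) \;\equiv\; x = y \lor E(x,y), \qquad
\varphi_{k+1}(x,y) \;\equiv\; \exists z\,\bigl(\varphi_k(x,z) \land \varphi_k(z,y)\bigr).
```

Since in a graph with n vertices every path can be shortened to length less than n, the formula \varphi_{\lceil \log_2 n \rceil} defines reachability with quantifier nesting depth \lceil \log_2 n \rceil.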
 Bodo Manthey, Till Tantau:
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise.
Technical report ECCC-TR07-039,
Electronic Colloquium on Computational Complexity,
2007.
Binary search trees are a fundamental data structure and their height plays a key role in the analysis of divide-and-conquer algorithms like quicksort. Their worst-case height is linear; their average height, whose exact value is one of the best-studied problems in average-case complexity, is logarithmic. We analyze their smoothed height under additive noise: An adversary chooses a sequence of n real numbers in the range [0,1]; each number is individually perturbed by adding a random value from an interval of size d; and the resulting numbers are inserted into a search tree. The expected height of this tree is called smoothed tree height. If d is very small, namely for d < 1/n, the smoothed tree height is the same as the worst-case height; if d is very large, the smoothed tree height approaches the logarithmic average-case height. An analysis of what happens between these extremes lies at the heart of our paper: We prove that the smoothed height of binary search trees is $\Theta(\sqrt{n/d} + \log n)$, where d >= 1/n may depend on n. This implies that the logarithmic average-case height becomes manifest only for $d \in \Omega(n/\log^2 n)$. For the analysis, we first prove that the smoothed number of left-to-right maxima in a sequence is also $\Theta(\sqrt{n/d} + \log n)$. We apply these findings to the performance of the quicksort algorithm, which needs $\Theta(n^2)$ comparisons in the worst case and $\Theta(n \log n)$ on average, and prove that the smoothed number of comparisons made by quicksort is $\Theta(\frac{n}{d+1}\sqrt{n/d} + n \log n)$. This implies that the average case becomes manifest already for $d \in \Omega(\sqrt[3]{n/\log^2 n})$.
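The perturbation model is straightforward to simulate; the following sketch (sequence length, noise values, and trial count are my own choices) inserts the adversarial sorted sequence, perturbed by uniform noise from an interval of size d, into a binary search tree and reports the average height:

```python
import random

def bst_height(seq):
    """Height (nodes on the longest root-leaf path) of the binary search
    tree obtained by inserting seq in order; iterative on purpose."""
    if not seq:
        return 0
    root = [seq[0], None, None]       # node = [value, left, right]
    height = 1
    for x in seq[1:]:
        node, depth = root, 1
        while True:
            i = 1 if x < node[0] else 2
            if node[i] is None:
                node[i] = [x, None, None]
                height = max(height, depth + 1)
                break
            node, depth = node[i], depth + 1
    return height

def smoothed_height(n, d, trials=10):
    """Average height for the sorted sequence 1/n, ..., 1 under noise d."""
    total = 0
    for _ in range(trials):
        seq = [(i + 1) / n + random.uniform(0, d) for i in range(n)]
        total += bst_height(seq)
    return total / trials

random.seed(0)
print(smoothed_height(300, 0, trials=1))   # 300.0: no noise, worst case
print(smoothed_height(300, 1.0))           # much smaller, near-logarithmic
```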
 Till Tantau:
Complexity of the Undirected Radius and Diameter Problems for Succinctly Represented Graphs.
Technical report SIIM-TR-A-08-03,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
 Till Tantau:
Generalizations of the Hartmanis-Immerman-Sewelson Theorem and Applications to Infinite Subsets of P-Selective Sets.
Technical report ECCC-TR08-027,
Electronic Colloquium on Computational Complexity,
2008.
The Hartmanis-Immerman-Sewelson theorem is the classical link between the exponential and the polynomial time realm. It states that NE = E if, and only if, every sparse set in NP lies in P. We establish similar links for classes other than sparse sets: 1. E = UE if, and only if, all functions f: {1}^* to Sigma^* in NPSV_g lie in FP. 2. E = NE if, and only if, all functions f: {1}^* to Sigma^* in NPFewV lie in FP. 3. E = E^NP if, and only if, all functions f: {1}^* to Sigma^* in OptP lie in FP. 4. E = E^NP if, and only if, all standard left cuts in NP lie in P. 5. E = EH if, and only if, PH cap P/poly = P. We apply these results to the immunity of P-selective sets. It is known that they can be bi-immune, but not Pi_2^p/1-immune. Their immunity is closely related to top-Toda languages, whose complexity we link to the exponential realm, and also to king languages. We introduce the new notion of superkings, which are characterized in terms of exists-forall-predicates rather than forall-exists-predicates, and show that king languages cannot be Sigma_2^p-immune. As a consequence, P-selective sets cannot be Sigma_2^p/1-immune and, if E^NP^NP = E, not even P/1-immune.
 Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, Till Tantau:
Dynamic Kernels for Hitting Sets and Set Packing.
Technical report,
Electronic Colloquium on Computational Complexity,
2019.
Computing kernels for the hitting set problem (the problem of
finding a size-$k$ set that intersects each hyperedge of a
hypergraph) is a well-studied computational problem. For hypergraphs
with $m$ hyperedges, each of size at most~$d$, the best algorithms
can compute kernels of size $O(k^d)$ in time $O(2^d m)$. In this
paper we generalize the task to the \emph{dynamic} setting where
hyperedges may be continuously added and deleted and we always have
to keep track of a hitting set kernel (including moments when no
size-$k$ hitting set exists). We present a deterministic solution,
based on a novel data structure, that needs worst-case time
$O^*(3^d)$ for updating the kernel upon hyperedge inserts and
time~$O^*(5^d)$ for updates upon deletions, thus nearly matching
the time $O^*(2^d)$ needed by the best static algorithm per
hyperedge. As a novel technical feature, our approach does not use
the standard replace-sunflowers-by-their-cores methodology, but
introduces a generalized concept that is actually easier to compute
and that allows us to achieve a kernel size of $\sum_{i=0}^d k^i$
rather than the typical size $d!\cdot k^d$ resulting from the Sunflower
Lemma. We also show that our approach extends to the dual problem of
finding packings in hypergraphs (the problem of finding $k$ pairwise
disjoint hyperedges), albeit with a slightly larger kernel size of
$\sum_{i=0}^d d^i(k-1)^i$.
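For orientation, the static decision problem underlying the abstract can be phrased as a plain d-way branching solver; this illustrates the problem definition only, not the paper's dynamic data structure:

```python
def has_hitting_set(hyperedges, k):
    """Is there a set of at most k vertices intersecting every hyperedge?
    Branches over the vertices of one remaining hyperedge, so the running
    time is O(d^k * m) for m hyperedges of size at most d."""
    if any(len(e) == 0 for e in hyperedges):
        return False                  # an empty hyperedge can never be hit
    if not hyperedges:
        return True                   # nothing left to hit
    if k == 0:
        return False                  # edges remain but the budget is spent
    e = hyperedges[0]                 # some vertex of e must be chosen
    return any(
        has_hitting_set([f for f in hyperedges if v not in f], k - 1)
        for v in e
    )

edges = [{1, 2}, {2, 3}, {3, 4}]
assert has_hitting_set(edges, 2)      # e.g. {2, 3} hits every edge
assert not has_hitting_set(edges, 1)  # no single vertex hits all three
```

The kernels discussed in the paper shrink the hypergraph before any such search, and the dynamic setting requires maintaining that shrunken instance under edge insertions and deletions.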
 Malte Skambath, Till Tantau:
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration.
Technical report arXiv:1608.08385,
CoRR,
2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow, and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular TeX typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running TeX on the documents then results in documents in the SVG format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
 Christoph Stockhusen, Till Tantau:
Completeness Results for Parameterized Space Classes.
Technical report arXiv:1308.2892,
arXiv,
2013.
The parameterized complexity of a problem is considered "settled" once it has
been shown to lie in FPT or to be complete for a class in the W-hierarchy or a
similar parameterized hierarchy. Several natural parameterized problems have,
however, resisted such a classification. At least in some cases, the reason is
that upper and lower bounds for their parameterized space complexity have
recently been obtained that rule out completeness results for parameterized
time classes. In this paper, we make progress in this direction by proving that
the associative generability problem and the longest common subsequence problem
are complete for parameterized space classes. These classes are defined in
terms of different forms of bounded nondeterminism and in terms of simultaneous
time-space bounds. As a technical tool we introduce a "union operation" that
translates between problems complete for classical complexity classes and for
W-classes.
 Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space Complexity of Parameterized Problems.
Technical report,
ECCC,
2012.
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L = NL. For a number of further natural parameterized problems, including the longest common subsequence problem, the acceptance problem for multi-head automata, and the associative generability problem, we show that they lie in or are complete for different parameterized space classes. Our results explain why previous attempts at proving completeness of different problems for parameterized time classes have failed.
 Michael Elberfeld, Andreas Jakoby, Till Tantau:
Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth.
Technical report ECCC-TR11-128,
Electronic Colloquium on Computational Complexity,
2011.
An algorithmic meta theorem for a logic and a class C of structures states that all problems expressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle's Theorem, which states that monadic second-order (MSO) definable problems are linear-time solvable on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that MSO-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision problems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata's computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree decompositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.
 Michael Elberfeld, Andreas Jakoby, Till Tantau:
Logspace Versions of the Theorems of Bodlaender and Courcelle.
Technical report ECCC-TR10-062,
Electronic Colloquium on Computational Complexity,
2010.
Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree decomposition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for
every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
 Michael Elberfeld, Till Tantau:
Phylogeny- and Parsimony-Based Haplotype Inference with Constraints.
Technical report SIIM-TR-A-10-01,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2010.
Haplotyping, also known as haplotype phase prediction, is
the problem of predicting likely haplotypes based on genotype data.
One fast computational haplotyping method is based on an evolutionary
model where a perfect phylogenetic tree is sought that explains the
observed data. In their CPM'09 paper, Fellows et al. studied an
extension of this approach that incorporates prior knowledge in the
form of a set of candidate haplotypes from which the right haplotypes
must be chosen. While this approach is attractive to increase the
accuracy of haplotyping methods, it was conjectured that the resulting
formal problem, constrained perfect phylogeny haplotyping, might be
NP-complete. In the paper at hand we present a polynomial-time
algorithm for it. Our algorithmic ideas also yield new fixed-parameter
algorithms for related haplotyping problems based on the maximum
parsimony assumption.
 Michael Elberfeld, Till Tantau:
Computational Complexity of Perfect-Phylogeny-Related Haplotyping Problems.
Technical report SIIM-TR-A-08-02,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. This
problem, which has strong practical applications, can be approached
using both statistical and combinatorial methods. While the
most direct combinatorial approach, maximum
parsimony, leads to NP-complete problems, the perfect phylogeny
model proposed by Gusfield yields a problem, called PPH, that can be
solved in polynomial (even linear) time. Even this may
not be fast enough when the whole genome is studied, leading to the
question of whether parallel algorithms can be used to solve the PPH
problem. In the present paper we answer this question affirmatively,
but we also give lower bounds on its complexity. In
detail, we show that the problem lies in Mod$_2$L, a subclass of the
circuit complexity class NC$^2$, and is hard for
logarithmic space and thus presumably not in NC$^1$. We also
investigate variants of the PPH problem that have been studied in
the literature, like the perfect path phylogeny haplotyping problem
and the combined problem where a perfect phylogeny of maximal parsimony
is sought, and show that some of these variants are TC$^0$-complete or
lie in AC$^0$.
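The perfect-phylogeny model discussed above has a classical characterization for binary characters: a perfect phylogeny exists if, and only if, no two columns of the character matrix contain all four gametes 00, 01, 10, 11. A minimal checker (illustrative; not the algorithms analyzed in the report):

```python
from itertools import combinations

def admits_perfect_phylogeny(matrix):
    """Four-gamete test for a binary character matrix (rows are taxa,
    columns are characters): a perfect phylogeny exists iff no column
    pair exhibits all four combinations 00, 01, 10, 11."""
    columns = list(zip(*matrix))
    for c1, c2 in combinations(columns, 2):
        if len(set(zip(c1, c2))) == 4:
            return False
    return True

assert admits_perfect_phylogeny([(0, 0), (0, 1), (1, 0)])
assert not admits_perfect_phylogeny([(0, 0), (0, 1), (1, 0), (1, 1)])
```

The haplotyping problem PPH is harder than this test, since the rows are genotypes whose two underlying haplotype rows must first be inferred.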
 Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
Technical report SIIM-TR-A-08-05,
Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck,
2008.
Haplotyping, also known as haplotype phase prediction, is the
problem of predicting likely haplotypes based on genotype data. One
fast haplotyping method is based on an evolutionary model where a
perfect phylogenetic tree is sought that explains the
observed data. Unfortunately, when data entries are missing, as is
often the case in real laboratory data, the resulting formal problem
IPPH, which stands for incomplete perfect phylogeny
haplotyping, is NP-complete and no theoretical results are known
concerning its approximability, fixedparameter tractability or
exact algorithms for it. Even radically simplified versions, such as
the restriction to phylogenetic trees consisting of just two directed
paths from a given root, are still NP-complete, but here, at least,
a fixed-parameter algorithm is known. We generalize this algorithm
to arbitrary tree topologies and present the first
theoretical analysis of an algorithm that works on arbitrary
instances of the original IPPH problem. At the same time we
also show that restricting the tree topology does not always make
finding phylogenies easier: while the incomplete directed perfect phylogeny
problem is well-known to be solvable in polynomial time, we show
that the same problem restricted to path topologies is NP-complete.