- Max Bannach, Zacharias Heinrich, Till Tantau, Rüdiger Reischuk:
Dynamic Kernels for Hitting Sets and Set Packing.
In Proceedings of the 16th International Symposium on Parameterized and Exact Computation (IPEC 2021), Volume 214 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
2021.
Go to website | Show abstract
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m · 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant).
We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.
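The query routine mentioned above can be illustrated with a minimal bounded-search-tree sketch: pick an unhit hyperedge, branch on which of its at most d vertices joins the hitting set, and recurse with budget k-1. This baseline runs in O(d^k · m) time, not with the improved base c = d - 1 + O(1/d) from the paper; the function name is illustrative, not from the authors' implementation.

```python
def has_hitting_set(edges, k):
    """Decide whether the hypergraph given as a list of vertex sets
    has a hitting set of size at most k (simple O(d^k * m) branching)."""
    if not edges:
        return True           # every hyperedge is already hit
    if k == 0:
        return False          # budget exhausted but hyperedges remain
    e = next(iter(edges))     # some vertex of this edge must be chosen
    for v in e:
        # keep only the hyperedges not hit by v and recurse
        rest = [f for f in edges if v not in f]
        if has_hitting_set(rest, k - 1):
            return True
    return False
```

For example, the 2-hypergraph with edges {1,2}, {2,3}, {3,4} has a hitting set of size 2 (e.g., {2,3}) but none of size 1.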
- Florian Thaeter, Rüdiger Reischuk:
Scalable k-anonymous Microaggregation: Exploiting the Tradeoff between Computational Complexity and Information Loss.
In Proceedings of the 18th International Conference on Security and Cryptography (SECRYPT 2021), pp. 87-98.
SCITEPRESS,
2021.
Show abstract
k-anonymous microaggregation is a standard technique to improve privacy of individuals whose personal data is used in microdata databases. Unlike semantic privacy requirements like differential privacy, k-anonymity allows the unrestricted publication of data, suitable for all kinds of analysis, since every individual is hidden in a cluster of size at least k. Microaggregation can preserve a high level of utility, that is, small information loss caused by the aggregation procedure, compared to other anonymization techniques like generalization or suppression. Minimizing the information loss in k-anonymous microaggregation is an NP-hard clustering problem for k ≥ 3. Moreover, no efficient approximation algorithms with a nontrivial approximation ratio are known. Therefore, a number of heuristics have been developed to attain high utility, all with at least quadratic time complexity in the size of the database. We improve this situation in several respects, providing a tradeoff between computational effort and utility. First, a quadratic-time algorithm ONA* is presented that achieves significantly better utility on standard benchmarks. Next, an almost linear-time algorithm is developed that gives worse, but still acceptable, utility. This is achieved by a suitable adaptation of the Mondrian clustering algorithm. Finally, combining both techniques, a new class MONA of parameterized algorithms is designed that delivers competitive utility for user-specified time constraints between almost linear and quadratic.
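The core mechanism of k-anonymous microaggregation can be sketched in one dimension: partition the sorted records into blocks of size at least k and replace each value by its block mean; the information loss is then the sum of squared errors. This toy sketch only illustrates the setting, not the ONA*/MONA algorithms of the paper, and all names are illustrative.

```python
def microaggregate_1d(values, k):
    """Toy 1-D k-anonymous microaggregation: sort the records, group
    them into blocks of size >= k, and replace every value by the
    mean of its block. Returns the anonymized values in input order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i, n = 0, len(values)
    while i < n:
        # the last block absorbs the remainder so each block has >= k records
        j = n if n - i < 2 * k else i + k
        block = order[i:j]
        mean = sum(values[t] for t in block) / len(block)
        for t in block:
            out[t] = mean
        i = j
    return out

def information_loss(values, anonymized):
    """Sum of squared errors caused by the aggregation."""
    return sum((a - b) ** 2 for a, b in zip(values, anonymized))
```

For instance, aggregating [1, 2, 10, 11] with k = 2 yields the blocks {1, 2} and {10, 11}, replaced by their means 1.5 and 10.5, for an information loss of 1.0.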
- Marcel Wienöbst, Maciej Liskiewicz:
An Approach to Reduce the Number of Conditional Independence Tests in the PC Algorithm.
In Proceedings of the 44th German Conference on AI (KI 2021), Volume 12873 of Lecture Notes in Computer Science, pp. 276-288.
Springer,
2021.
Go to website
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Extendability of Causal Graphical Models: Algorithms and Computational Complexity.
In Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI 2021), Volume 161 of Proceedings of Machine Learning Research, pp. 1248-1257.
PMLR,
2021.
Go to website
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Polynomial-Time Algorithms for Counting and Sampling Markov Equivalent DAGs.
In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI'21), pp. 12198-12206.
AAAI Press,
2021.
Go to website | Show PDF | Show abstract
Counting and sampling directed acyclic graphs from a Markov equivalence class are fundamental tasks in graphical causal analysis. In this paper, we show that these tasks can be performed in polynomial time, solving a long-standing open problem in this area. Our algorithms are effective and easily implementable. Experimental results show that the algorithms significantly outperform state-of-the-art methods.
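The counting task can be made concrete with a brute-force baseline: the DAGs in a Markov equivalence class share a skeleton and v-structures, so one can count the orientations of an undirected graph that are acyclic and create no new v-structure. This exponential-time sketch is only meant to define the quantity being counted; the paper's polynomial-time algorithm works very differently, and the function names here are illustrative.

```python
from itertools import product

def count_moral_orientations(vertices, edges):
    """Count orientations of the undirected graph (vertices, edges)
    that are acyclic and contain no v-structure a -> c <- b with a, b
    non-adjacent. Brute force over all 2^|edges| orientations."""
    undirected = {frozenset(e) for e in edges}

    def is_acyclic(arcs):
        # Kahn's algorithm: the orientation is acyclic iff all
        # vertices can be removed in topological order.
        indeg = {v: 0 for v in vertices}
        succ = {v: [] for v in vertices}
        for a, b in arcs:
            succ[a].append(b)
            indeg[b] += 1
        queue = [v for v in vertices if indeg[v] == 0]
        seen = 0
        while queue:
            v = queue.pop()
            seen += 1
            for w in succ[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return seen == len(vertices)

    def has_v_structure(arcs):
        for (a, c1) in arcs:
            for (b, c2) in arcs:
                if c1 == c2 and a != b and frozenset((a, b)) not in undirected:
                    return True
        return False

    count = 0
    for flips in product([False, True], repeat=len(edges)):
        arcs = [((b, a) if f else (a, b)) for (a, b), f in zip(edges, flips)]
        if is_acyclic(arcs) and not has_v_structure(arcs):
            count += 1
    return count
```

For the path a - b - c, three of the four orientations qualify; a -> b <- c is excluded because it introduces a v-structure between the non-adjacent vertices a and c.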
- Marcel Wienöbst, Max Bannach, Maciej Liskiewicz:
Recent Advances in Counting and Sampling Markov Equivalent DAGs.
In Proceedings of the 44th German Conference on AI (KI 2021), Volume 12873 of Lecture Notes in Computer Science, pp. 271-275.
Springer,
2021.
Go to website | Show abstract
Counting and sampling directed acyclic graphs (DAGs) from a Markov equivalence class are fundamental tasks in graphical causal analysis. In this paper, we discuss recently proposed polynomial-time algorithms for these tasks. The presented result solves a long-standing open problem in graphical modelling. Experiments show that the proposed algorithms are implementable and effective in practice. Our paper is an extended abstract of the work [24], honored as an AAAI-21 Distinguished Paper at the 35th AAAI Conference on Artificial Intelligence.