
Institute for Theoretical Computer Science

Events 2004


  • A Lübeck computer science diploma thesis wins the CAST-Förderpreis IT-Sicherheit 2004 (CAST prize for IT security). We congratulate Ulrich Wölfel on first place in the "Studierende" (students) category.

  • 40th anniversary celebration of the Universität zu Lübeck

    Open house (Tag der Offenen Tür) on 12 November 2004
  • 25 August 2004, 4 p.m. c.t., Room 21, Building 64, 2nd floor
    Asst. Prof. Atsuko Yamaguchi, Biomolecular Engineering Research Institute, Kyoto University/Japan
    Finding the Maximum Common Subgraph for Chemical Compounds in Biological Pathways

    The maximum common subgraph problem is known to be NP-hard, although it has been applied in various areas. In the field of molecular biology, we can reduce the problem space by analyzing the structures of chemical compounds in biological pathways. In doing so, we have found that the tree-width of chemical compounds is bounded by a constant, and that the number of possible spanning trees of any compound is polynomially bounded. We present a polynomial-time algorithm for finding the maximum common connected induced subgraph of a degree-bounded partial k-tree and a connected graph whose number of possible spanning trees is polynomial.
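
    For illustration only, here is a naive, exponential-time baseline for the maximum common connected induced subgraph of two small graphs given as adjacency sets. It does not exploit the bounded tree-width structure that makes the algorithm from the talk polynomial, and the function names are ours.

    from itertools import combinations, permutations

    def is_connected(vertices, adj):
        """Is the subgraph induced by `vertices` connected?"""
        vertices = set(vertices)
        if not vertices:
            return False
        stack, seen = [next(iter(vertices))], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(adj[v] & vertices)
        return seen == vertices

    def max_common_connected_induced_subgraph(adj1, adj2):
        """Largest mapping f whose domain induces a connected subgraph of
        adj1 that is isomorphic, as an induced subgraph, to its image in
        adj2.  Brute force: only usable for very small graphs."""
        v1, v2 = list(adj1), list(adj2)
        for size in range(min(len(v1), len(v2)), 0, -1):
            for sub1 in combinations(v1, size):
                if not is_connected(sub1, adj1):
                    continue
                for sub2 in combinations(v2, size):
                    for image in permutations(sub2):
                        f = dict(zip(sub1, image))
                        if all((b in adj1[a]) == (f[b] in adj2[f[a]])
                               for a, b in combinations(sub1, 2)):
                            return f
        return {}

    # Toy "compounds": two paths on four vertices, given as adjacency sets.
    g1 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    g2 = {'a': {'b', 'c'}, 'b': {'a'}, 'c': {'a', 'd'}, 'd': {'c'}}
    print(max_common_connected_induced_subgraph(g1, g2))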

  • 4 February 2004, 5 p.m. c.t., R3/H21
    Prof. Hong Zhu, Dept. of Computer Science, Fudan University, Shanghai/China
    Incentive Compatible Mechanism for Single-Minded Auction

    In general combinatorial auctions, incentive compatibility, which requires that agents bid truthfully, is one of the most important issues. We discuss an incentive compatible mechanism based on a linear pricing scheme for the single-minded auction, a restricted case of the combinatorial auction in which each buyer desires a single fixed bundle of commodities, improving previous work on pricing bundles (i.e. the payments of buyers).
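
    The linear-pricing mechanism from the talk is not reproduced here. As a self-contained illustration of incentive compatibility for single-minded bidders, the sketch below implements the well-known greedy mechanism with critical-value payments (Lehmann, O'Callaghan and Shoham), where each bidder declares one bundle and one bid; the function name and toy data are ours.

    from math import sqrt

    def greedy_single_minded_auction(bids):
        """bids: list of (bundle, bid) pairs, one per single-minded bidder.
        Greedy allocation by bid/sqrt(|bundle|) with critical-value payments,
        which makes truthful bidding a dominant strategy."""
        order = sorted(range(len(bids)),
                       key=lambda i: bids[i][1] / sqrt(len(bids[i][0])),
                       reverse=True)
        winners, taken = [], set()
        for i in order:
            bundle, _ = bids[i]
            if taken.isdisjoint(bundle):
                winners.append(i)
                taken |= set(bundle)

        payments = {}
        for i in winners:
            # Critical value: the smallest bid with which i would still win.
            # Rerun the greedy rule without i and find the first bidder j
            # that would be granted its bundle but conflicts with i's bundle.
            bundle_i, _ = bids[i]
            taken_wo_i = set()
            payments[i] = 0.0
            for j in order:
                if j == i:
                    continue
                bundle_j, bid_j = bids[j]
                if not taken_wo_i.isdisjoint(bundle_j):
                    continue
                if set(bundle_i) & set(bundle_j):
                    payments[i] = bid_j * sqrt(len(bundle_i)) / sqrt(len(bundle_j))
                    break
                taken_wo_i |= set(bundle_j)
        return winners, payments

    # Toy run: three single-minded bidders over commodities {1, 2, 3}.
    bids = [({1, 2}, 10.0), ({2, 3}, 8.0), ({3}, 5.0)]
    print(greedy_single_minded_auction(bids))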

  • 3 February 2004, 4 p.m. c.t., H1/SFS
    Dr. Ning Chen, Dept. of Computer Science, Fudan University, Shanghai/China
    Computational Market Equilibrium

    The computation of market equilibria is an important problem in both economic theory and theoretical computer science. We review the development of the problem over the last two years and discuss open problems.
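
    As a minimal numerical illustration, not taken from the talk, the sketch below computes an equilibrium of a linear Fisher market by solving the Eisenberg-Gale convex program with scipy; the function fisher_equilibrium and the toy market are our own.

    import numpy as np
    from scipy.optimize import minimize

    def fisher_equilibrium(valuations, budgets):
        """Equilibrium of a linear Fisher market via the Eisenberg-Gale
        convex program: max sum_i budgets[i]*log(u_i) with
        u_i = sum_j valuations[i,j]*x[i,j], sum_i x[i,j] <= 1, x >= 0."""
        v = np.asarray(valuations, dtype=float)
        m = np.asarray(budgets, dtype=float)
        n, g = v.shape

        def neg_objective(x):
            u = (v * x.reshape(n, g)).sum(axis=1)
            return -(m * np.log(u + 1e-12)).sum()

        constraints = [{'type': 'ineq',
                        'fun': (lambda x, j=j: 1.0 - x.reshape(n, g)[:, j].sum())}
                       for j in range(g)]
        res = minimize(neg_objective, np.full(n * g, 1.0 / n), method='SLSQP',
                       bounds=[(0.0, 1.0)] * (n * g), constraints=constraints)
        x = res.x.reshape(n, g)
        # Market-clearing prices: every buyer spends its whole budget, so the
        # price of good j is the total money spent on it.
        u = (v * x).sum(axis=1)
        prices = ((m / u)[:, None] * v * x).sum(axis=0)
        return x, prices

    # Toy market: two buyers with opposite preferences and unit budgets.
    x, p = fisher_equilibrium([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
    print(np.round(x, 3), np.round(p, 3))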

  • 7 January 2004, 5 p.m. c.t., H1, Seefahrtschule
    Prof. Lawrence L. Larmore, School of Computer Science, University of Nevada, Las Vegas
    Knowledge State Algorithms and Paging: Randomization with Limited Information

    The harmonic number Hk is a known lower bound for the competitiveness of any randomized online algorithm for the k-cache problem. It is also known that to achieve this competitiveness, the algorithm must remember the names of some of the pages that have been ejected from the cache. We call those pages "bookmarked" pages.

    We present optimally competitive randomized online algorithms for the 2-cache problem and the 3-cache problem, using a novel technique called "knowledge states." Our algorithms sometimes randomly choose to "forget" information from the past.

    The central device used is an "estimator" which estimates what the optimal cost has been so far, as a function of the state of the optimal offline algorithm. Not only does the algorithm use randomization to make its moves, but the estimator is updated at every step using a Las Vegas technique. The combination of estimator and algorithm distribution is called a knowledge state.

    Our knowledge state algorithm for the 2-cache problem uses only one bookmark, and our knowledge state algorithm for the 3-cache problem uses only two bookmarks. One question is, how many bookmarks are needed by an optimally competitive randomized online algorithm for the k-cache problem, for general k? We know that an algorithm with no bookmarks can achieve competitiveness 2Hk-1. What is the tradeoff between number of bookmarks and competitiveness?

    We have also used knowledge states successfully to achieve the lowest known competitiveness for the randomized 2-server problem on the line, although the result was obtained with the help of a computer and has not yet been verified by hand.

    In general, the knowledge state approach, for any online problem, is a systematic tool for limiting the complexity of a randomized online algorithm, perhaps at the cost of loss of optimality.
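
    As a concrete point of comparison for the no-bookmark bound mentioned above, the sketch below simulates the classic randomized marking algorithm for paging, which remembers nothing about evicted pages and is known to be competitive within roughly 2Hk; it is not the knowledge-state algorithm from the talk, and the class name is ours.

    import random

    class RandomizedMarking:
        """Classic randomized marking algorithm for a k-page cache: it keeps
        only mark bits on cached pages and no names of evicted pages."""

        def __init__(self, k):
            self.k = k
            self.cache = set()    # pages currently in the cache
            self.marked = set()   # pages requested in the current phase
            self.faults = 0

        def request(self, page):
            if page in self.cache:
                self.marked.add(page)
                return
            self.faults += 1
            if len(self.cache) == self.k:
                if self.marked == self.cache:   # phase ends: unmark everything
                    self.marked.clear()
                victim = random.choice(sorted(self.cache - self.marked))
                self.cache.remove(victim)
            self.cache.add(page)
            self.marked.add(page)

    # Toy run with a cache of size 2.
    alg = RandomizedMarking(2)
    for p in [1, 2, 3, 1, 2, 3]:
        alg.request(p)
    print("faults:", alg.faults)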