Key Areas

Overview

We do research in the areas of complexity theory and mathematical logic. Our key areas are circuit complexity, complexity of counting and enumeration problems, parameterised complexity, descriptive complexity, as well as team logics and satisfiability problems.

  • Complexity Theory

    Complexity theory studies the computational complexity of problems in different models of computation. Typical models are Turing machines, random access machines and Boolean (or arithmetic) circuits. Complexity measures such as runtime and storage space are defined, and problems are classified by the resources required to solve them. In this regard, upper and lower bounds for the membership of problems in the resulting classes are proven. The detailed study of upper bounds, especially at a more fine-grained level, is usually seen as part of the field of algorithms.

    In the context of our key research areas it makes sense to distinguish branches of complexity theory based on the following criteria:

    • Type of problems: decision problems (is there a solution?), counting problems (how many solutions are there?), enumeration problems (enumerate all solutions), etc.
    • Computation model: Turing machine, Random access machine, Boolean (or arithmetic) circuits, etc.
    • Exactness of solving the problem: exact solution, approximation, computation is only efficient on certain inputs (parameterised complexity), etc.
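    As a toy illustration of the first distinction, consider the subset-sum problem: the same solution space gives rise to a decision, a counting and an enumeration problem. The following minimal Python sketch (the problem and all names are chosen purely for illustration) makes the three variants concrete:

```python
from itertools import combinations

def solutions(items, target):
    """Yield all subsets of `items` that sum to `target` (toy problem)."""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                yield combo

items, target = [1, 2, 3, 4], 5

# Decision problem: is there a solution?
exists = any(True for _ in solutions(items, target))

# Counting problem: how many solutions are there?
count = sum(1 for _ in solutions(items, target))

# Enumeration problem: output all solutions.
all_solutions = list(solutions(items, target))

print(exists, count, all_solutions)  # True 2 [(1, 4), (2, 3)]
```

    The three variants can differ drastically in complexity: for many problems the decision version is tractable while the counting version is not.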

    Besides these questions about the complexity of individual problems, complexity theory is also concerned with structural properties of complexity classes, in order to better understand them. For example, closure properties are considered: for classes of decision problems, typical candidates are closure under different Boolean operations.

  • Logic

    Mathematical Logic studies different kinds of logical formalisms.

    Besides the well-known examples such as first-order logic and propositional logic, we mainly study different kinds of team logics as well as nonclassical logics (especially default logic). The characteristic feature of team logics is that sets of assignments, so-called teams, are considered instead of individual assignments. Teams can be interpreted as database instances, and team logics allow one to formalise many important concepts from the area of databases.

    Our research focuses on the computational complexity of central problems for these logics, such as satisfiability and model checking.

    A subfield located exactly at the intersection of complexity theory and logic is descriptive complexity. Here, the goal is to identify logics in which exactly the problems of a given complexity class are definable.

Our Key Research Areas

  • Circuit Complexity

    We study the fine structure of complexity classes that are defined via Boolean circuits of small depth (i.e., NC1, SAC1 and subclasses). In particular, methods and techniques from logic (descriptive complexity, finite model theory) and algebra (groupoid theory, algebraic automata theory) turn out to be important here.

  • Counting Complexity

    Classical complexity theory studies decision problems, that is, problems that ask whether the input is contained in a certain set. Many natural problems are not of this form and can rather be formalised as a function mapping inputs to numbers. This class of problems encompasses problems which ask for the number of solutions, but also many problems from mathematics such as the computation of the determinant.

    The term "counting complexity" is due to the fact that many of these problems are formulated with respect to counting of solutions and also the most important computation models in this area are defined in terms of counting the number of "witnesses of a successful computation". For example, a nondeterministic Turing machine computes the function counting the number of accepting computation paths in this context.
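    The counting of satisfying assignments of a propositional formula (#SAT) is the prototypical example. A brute-force Python sketch (clause encoding chosen for illustration: clauses are lists of nonzero integers, negative meaning negated) mirrors the machine model above, since it counts exactly the accepting "guesses" of a machine that nondeterministically picks an assignment and verifies it:

```python
from itertools import product

def count_satisfying(clauses, n):
    """Brute-force #SAT over variables 1..n: count the assignments
    satisfying every clause.  Each assignment corresponds to one
    computation path of a guess-and-verify nondeterministic machine."""
    count = 0
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2) is satisfied iff x2 is true:
print(count_satisfying([[1, 2], [-1, 2]], 2))  # prints 2
```

    The exhaustive loop takes exponential time; the central question of counting complexity is for which problems this can be avoided.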

    A complexity theory for counting problems is of independent interest, but the area has also attracted a lot of attention because many results from counting complexity have important implications in classical complexity theory.

  • Enumeration

    So-called enumeration problems ask for the output of all solutions of a given problem instance. We investigate the complexity of such problems, measured in particular by the delay: the maximum time that elapses between two consecutive solutions in the output. This area of research can be traced back to the work of Johnson, Papadimitriou and Yannakakis (On Generating All Maximal Independent Sets. Inf. Process. Lett. 27(3): 119-123 (1988)).
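    A simple example of an enumeration algorithm with polynomial delay is backtracking over all (not necessarily maximal) independent sets of a graph; the sketch below, with a graph represented as an adjacency dict for illustration, outputs each independent set exactly once, and since every branch of the search tree leads to at least one output, the time between two consecutive outputs is polynomial in the graph size:

```python
def independent_sets(adj):
    """Enumerate all independent sets of a graph given as an adjacency
    dict {vertex: set_of_neighbours}, by branching on each vertex in
    turn.  Every node of the search tree leads to at least one output
    (leaving the vertex out is always possible), so the delay between
    consecutive outputs is polynomial."""
    vertices = sorted(adj)

    def extend(i, current):
        if i == len(vertices):
            yield frozenset(current)
            return
        v = vertices[i]
        # Branch 1: leave v out.
        yield from extend(i + 1, current)
        # Branch 2: take v, provided no neighbour is already in.
        if not adj[v] & current:
            yield from extend(i + 1, current | {v})

    yield from extend(0, frozenset())

# Path graph 1-2-3 has 5 independent sets: {}, {1}, {2}, {3}, {1,3}
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(sorted(tuple(sorted(s)) for s in independent_sets(adj)))
```

    Note that the total number of solutions may be exponential in the input size, which is why the delay, rather than the total runtime, is the appropriate efficiency measure.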

    In the last decade, research on enumeration algorithms has grown rapidly. This can be explained by the vast increase of data that algorithms have to process each day. A prime example is the querying of databases, where huge amounts of data have to be processed; answering such queries incrementally and efficiently has thus become an everyday concern: a user of a web search engine, for instance, wants to receive the first answers to a query very quickly. Furthermore, enumeration algorithms have become central in applications in the areas of operations research, data mining, bioinformatics, computational linguistics as well as constraint satisfaction.

  • Parameterised Complexity

    One of the central insights of computational complexity theory is the hardness of specific problems: for problems such as the satisfiability problem of propositional logic, no subexponential-time algorithm is known, and most experts believe that no such algorithm exists. In practice, however, instances of problems that are classified as intractable in this sense can often be solved in reasonable time at the application level.

    There exist several approaches to understanding this gap between theory and practice. One of them studies the parameterised complexity of problems: here, a structural property of the problem is identified (the parameter), and the complexity is then classified with respect to both the input length and this parameter.

    On this level, one introduces complexity classes which comprise problems solvable in runtimes that may be arbitrarily bad in the parameter. A parameterised problem is considered efficiently solvable (fixed-parameter tractable) when an algorithm exists whose runtime is the product of a polynomial in the input length and an arbitrary computable function of the parameter. The runtime may thus be arbitrarily bad in the parameter, but for each fixed parameter value it is polynomial.
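    The textbook example is Vertex Cover parameterised by the size k of the cover. A bounded search tree of depth at most k yields a runtime of the form O(2^k · poly(n)), i.e., of the fixed-parameter tractable shape described above. A Python sketch (edge-list encoding chosen for illustration):

```python
def vertex_cover(edges, k):
    """Bounded search tree for k-Vertex-Cover: pick an uncovered edge
    and branch on its two endpoints.  The recursion depth is at most k,
    giving O(2^k * |edges|) time -- f(k) * poly(input size)."""
    if not edges:
        return set()           # nothing left to cover
    if k == 0:
        return None            # budget exhausted but edges remain
    u, v = edges[0]
    for w in (u, v):
        rest = [e for e in edges if w not in e]
        cover = vertex_cover(rest, k - 1)
        if cover is not None:
            return cover | {w}
    return None

triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover(triangle, 1))  # no cover of size 1 exists -> None
print(vertex_cover(triangle, 2))  # a cover of size 2, e.g. {1, 2}
```

    For small k this is fast even on large graphs, although the 2^k factor makes the algorithm useless for large parameter values; this is exactly the trade-off the definition permits.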

    One aims for parameters that have a practical justification and are typically very small: constant or at most slowly growing. If a problem is efficiently solvable under such a parameterisation, this already explains why it can be handled in practice despite its classical intractability.

  • Descriptive Complexity

    Descriptive complexity studies properties of complexity classes on a structural level. It aims to identify logics in which exactly the problems of a given complexity class are definable. While such characterisations of complexity classes are of independent interest, there is also hope to apply methods from mathematical logic to obtain new results about the characterised classes.

  • Team Logics

    Team semantics was introduced by Wilfrid Hodges in 1997 in order to replace the game-theoretic semantics of logics of imperfect information by a compositional semantics, as is common for many logics. Here, formulas are no longer evaluated on single assignments, as is usual in Tarski semantics, but instead on sets of such assignments, called teams.

    Based on team semantics, Jouko Väänänen introduced dependence logic in 2007, which extends first-order logic by the dependence atom =(x,y). Intuitively, it states that within the whole team the value of the variable y depends only on the value of x. It turned out that this even makes statements of existential second-order logic expressible. Since then, many similar non-classical atoms have been introduced to formalise statements about dependence and independence of assignments inside a team.
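    The semantics of the dependence atom is a plain functional-dependency check, familiar from databases. A minimal Python sketch (representing a team as a list of dicts, an encoding chosen here purely for illustration) checks whether =(xs, y) holds in a team, i.e., whether any two assignments agreeing on all variables in xs also agree on y:

```python
def dependence_holds(team, xs, y):
    """Check the dependence atom =(xs, y) in a team (a list of
    assignments, each a dict from variable names to values): the value
    of y must be a function of the values of the variables in xs."""
    seen = {}
    for s in team:
        key = tuple(s[x] for x in xs)
        if key in seen and seen[key] != s[y]:
            return False       # two assignments agree on xs, differ on y
        seen[key] = s[y]
    return True

team = [{"x": 0, "y": 1}, {"x": 0, "y": 1}, {"x": 1, "y": 0}]
print(dependence_holds(team, ["x"], "y"))                        # True
print(dependence_holds(team + [{"x": 0, "y": 0}], ["x"], "y"))   # False
```

    Viewing the team as a database table, this is exactly the statement that the functional dependency xs → y holds in the table.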

    We study team logics, in particular from the perspectives of computational complexity, expressive power and proof systems, and also investigate propositional, modal and temporal variants.

  • Satisfiability Problems

    The satisfiability problem (SAT) asks, given a propositional formula, whether the formula is satisfiable. SAT is the classical NP-complete problem, and its study has remarkably influenced the development of computational complexity theory. It is well known that restricting the class of considered formulas, e.g., to Horn formulas, allows for efficient decision procedures. In this area, we analyse the border between intractability and efficient solvability closely. In particular, we focus on possibilities to parallelise current algorithms or to improve their space requirements.
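    The Horn case mentioned above can be decided by unit propagation in polynomial time. A Python sketch (clause encoding as lists of nonzero integers, negative meaning negated, chosen for illustration) computes the minimal satisfying assignment of a Horn formula or reports unsatisfiability:

```python
def horn_sat(clauses):
    """Horn-SAT via unit propagation.  In a Horn clause at most one
    literal is positive.  Start with all variables false and repeatedly
    satisfy clauses whose negative literals are all forced true; this
    terminates after at most one round per variable, so it runs in
    polynomial time.  Returns the set of variables set to true, or
    None if the formula is unsatisfiable."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            pos = [l for l in clause if l > 0]
            neg = [-l for l in clause if l < 0]
            if any(p in true_vars for p in pos):
                continue                  # clause already satisfied
            if all(v in true_vars for v in neg):
                if not pos:
                    return None           # all-negative clause violated
                true_vars.add(pos[0])     # forced: set the head true
                changed = True
    return true_vars

# x1 and (not x1 or x2) and (not x2 or not x3): satisfiable
print(horn_sat([[1], [-1, 2], [-2, -3]]))  # prints {1, 2}
# x1 and (not x1): unsatisfiable
print(horn_sat([[1], [-1]]))               # prints None
```

    Restricting the syntax in this way is one side of the border we study; the other side is formed by fragments for which satisfiability remains NP-complete.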

    We especially consider nonmonotonic logics, which play a significant role in the areas of knowledge representation, the semantic web, and expert systems.

    Methodically, we will utilise techniques from universal algebra, e.g., Post's lattice.

You can find our current research projects here.