Cheminformatics
While Cheminformatics is formally a very old research field, its foundations seen from the perspective of theoretical Computer Science are not at all established research. We investigate hardness and develop conceptually new approaches for problems arising in or motivated by chemistry. We promote the modeling of chemistry with algebraic graph transformation approaches. In contrast to other methods, the atomic explicitness of our generic modeling framework often allows for a direct transfer of results from theoretical Computer Science and graph theory. The real-world applicability of our results is very relevant for our research, but since the same underlying principles apply in settings that may be considered wildly different by the practicing life scientist, we always aim at general results and at developing generic methods.
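As a hypothetical, minimal sketch of this modeling style (with hand-rolled pattern matching; the group's actual framework is based on algebraic graph transformation and is described on the page linked below): a molecule is an atom-explicit labeled graph, and a reaction is a rewrite of such graphs, here keto-enol tautomerization, H-C-C=O becoming C=C-O-H.

```python
# Hypothetical toy sketch: molecules as atom-explicit labeled graphs,
# reactions as graph rewrites. Not the group's actual framework.

def neighbors(bonds, v):
    return [(b if a == v else a, o) for (a, b), o in bonds.items() if v in (a, b)]

def keto_to_enol(atoms, bonds):
    """Find one match of H-C-C=O and apply the rewrite to C=C-O-H."""
    for (a, b), order in list(bonds.items()):
        if order != 2 or {atoms[a], atoms[b]} != {"C", "O"}:
            continue
        c2, o3 = (a, b) if atoms[a] == "C" else (b, a)
        for c1, o in neighbors(bonds, c2):
            if atoms[c1] != "C" or o != 1:
                continue
            hs = [h for h, oo in neighbors(bonds, c1)
                  if atoms[h] == "H" and oo == 1]
            if not hs:
                continue
            h = hs[0]
            bonds.pop(tuple(sorted((c1, h))))   # detach H from the alpha carbon
            bonds[tuple(sorted((c1, c2)))] = 2  # C-C becomes C=C
            bonds[tuple(sorted((c2, o3)))] = 1  # C=O becomes C-O
            bonds[tuple(sorted((o3, h)))] = 1   # H attaches to O
            return True
    return False

# Acetaldehyde, CH3-CHO: atoms 1-2 are carbons, 3 is oxygen, 4-7 hydrogens.
atoms = {1: "C", 2: "C", 3: "O", 4: "H", 5: "H", 6: "H", 7: "H"}
bonds = {(1, 2): 1, (2, 3): 2, (1, 4): 1, (1, 5): 1, (1, 6): 1, (2, 7): 1}
keto_to_enol(atoms, bonds)
print(bonds)  # the enol form: C=C, C-O, O-H
```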
For more information, see the separate Cheminformatics Group page.
Online Algorithms
Many real-world problems have an online component: some decisions have to be made immediately, before all related data is available, and the focus is often on minimizing resource consumption. This relates to environmental protection and resource management in general, facilitating cost-effective production and solutions. Online problems are optimization problems where the input arrives one request at a time, and each request must be processed without knowledge of future requests; the study of online algorithms is aimed at designing and analyzing algorithms for such problems. As usual in Computer Science, we want to be able to determine which of several algorithms solves a problem best, and a popular technique in the context of online algorithms is competitive analysis, where an algorithm is c-competitive if, for every input, its cost is at most c times that of an optimal offline algorithm, up to an additive constant. The overall goal of a theoretical quality measure is to predict the behavior of algorithms in practice. In that respect, competitive analysis works well in some cases but, as pointed out by its inventors and others, fails to discriminate between good and bad algorithms in other cases.
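As a small illustration (our own example, the classic ski-rental problem, not anything specific to the group's results): renting skis costs 1 per day, buying costs B, and the length t of the season is unknown in advance.

```python
# Toy illustration of competitive analysis on ski rental (our own example).
# The break-even algorithm rents for B - 1 days and buys on day B.

def online_cost(B: int, t: int) -> int:
    """Cost of the break-even algorithm on a season lasting t days."""
    return t if t < B else (B - 1) + B

def opt_cost(B: int, t: int) -> int:
    """Optimal offline cost: knowing t, rent throughout or buy at once."""
    return min(t, B)

# The worst case over all inputs gives the competitive ratio, here
# (2B - 1) / B < 2, so the algorithm is 2-competitive.
B = 10
print(max(online_cost(B, t) / opt_cost(B, t) for t in range(1, 100)))  # 1.9
```

No deterministic algorithm for this problem beats the break-even strategy's ratio, which is one reason the example is standard.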
Performance Measures
Ever since its introduction, researchers have worked on improving competitive analysis, defining variants, or defining measures based on other concepts to improve on the situation. Relative worst-order analysis (RWOA), a technique for assessing the relative quality of online algorithms, was developed by our group and is one of the most thoroughly tested such proposals. RWOA has been shown to be applicable to a wide variety of problems and to provide separations of algorithms, not obtainable using competitive analysis, corresponding better to experimental results or intuition in many cases. The group has written a survey of this area, in which we motivate and define RWOA, outline the background for its introduction, survey the most important results, and compare it to other measures.
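In rough terms (the survey gives the precise definition, including the requirement that the two algorithms be comparable, which we gloss over here), RWOA evaluates each algorithm on its worst permutation of the input and then compares the two algorithms directly. For a minimization problem:

```latex
% Worst-order performance of algorithm A: the worst reordering of input I.
\mathrm{A}_W(I) = \max_{\sigma} \mathrm{A}(\sigma(I))
% The relative worst-order ratio of A to B is then, roughly, the best
% bound c for which there is a constant b such that
\forall I : \; \mathrm{A}_W(I) \le c \cdot \mathrm{B}_W(I) + b .
```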
A significant part of the group's work on online algorithms has dealt with this measure, but also with other issues related to improving the accuracy with which the behavior of online algorithms can be predicted.
Advice Complexity
In online scenarios, requests arrive over time, and each request must be serviced in an irrevocable manner before the next request arrives. Online algorithms with advice is an area of research where one attempts to measure how much knowledge of future requests is necessary to achieve a given performance level, as defined by the competitive ratio. When this knowledge, the advice, is obtainable, this leads to practical algorithms, called semi-online algorithms. On the other hand, each negative result gives a robust bound on the limitations of a broad range of semi-online algorithms. The group has written a survey in which we explain the models for online algorithms with advice, motivate the study in general, present some examples of the work that has been carried out, and include a fairly complete set of references, organized by problem studied. The group has been involved in many of these results.
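Continuing the ski-rental toy example from above (again our own illustration): a single advice bit, revealing whether the season reaches the break-even point, already yields an optimal semi-online algorithm.

```python
# One advice bit suffices for optimality in ski rental (our own example).

def advice_bit(B: int, t: int) -> bool:
    """The oracle's single bit: will the season last at least B days?"""
    return t >= B

def cost_with_advice(B: int, t: int, buy_now: bool) -> int:
    return B if buy_now else t  # buy on day one, or rent throughout

B = 10
assert all(cost_with_advice(B, t, advice_bit(B, t)) == min(t, B)
           for t in range(1, 100))  # matches OPT on every input
```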
Online Algorithms with Predictions
In extension of the work on advice complexity, which can be viewed as online algorithms with predictions that are guaranteed to be accurate, we have broadened our scope within algorithms dealing with uncertainty, and we have recently focused on online algorithms with predictions. Among other things, this opens up for an integration of machine learning components with the online algorithms. The models for this topic are under development in an international context, but the overall focus is on establishing results that are good both with regard to consistency (the algorithm does well when the predictions are good) and robustness (the algorithm comes with some reasonable guarantees even when the predictions are bad).
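As a sketch of this consistency/robustness trade-off, here is ski rental with predictions in the style of the well-known scheme of Purohit, Svitkina, and Kumar (2018); variable names are ours.

```python
import math

# Ski rental with a prediction y_hat of the season length; lam in (0, 1]
# is a trust parameter (sketch in the style of Purohit et al., 2018).

def buy_day(B: int, y_hat: int, lam: float) -> int:
    """Day on which to buy, chosen from the prediction."""
    if y_hat >= B:
        return math.ceil(lam * B)      # trust "long season": buy early
    return math.ceil(B / lam)          # trust "short season": buy late

def cost(B: int, t: int, day: int) -> int:
    return t if t < day else (day - 1) + B  # rent until `day`, then buy

# Accurate predictions give ratio at most 1 + lam (consistency); arbitrary
# predictions still give ratio at most 1 + 1/lam (robustness).
B, lam = 10, 0.5
for y_hat, t in [(50, 50), (3, 3), (50, 3), (3, 50)]:
    print(y_hat, t, cost(B, t, buy_day(B, y_hat, lam)) / min(t, B))
```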
Graph Theory
Besides being a very active research area in itself, graph theory is also an important modeling tool for real-life problems. Examples include GPS systems (route calculations), assigning workers to jobs, booking systems, structuring of information, scheduling construction work, etc. Many problems are best modeled as directed graphs. While undirected graph theory is very well developed, with deep theories, the structure of directed graphs is much less well understood, and the directed version of a problem is often much harder to solve algorithmically than its undirected counterpart.
The graph theory group at IMADA has contributed significantly to the better understanding of both structural and algorithmic aspects of directed graphs. The group works on theory development, on discovering new connections between different topics, and on finding efficient solutions to difficult (NP-hard) problems for special classes of (di)graphs. We also work on parameterized algorithms for difficult problems, where the goal is to develop algorithms that perform well provided that a given parameter is low.
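As a minimal illustration of the parameterized viewpoint (a generic textbook example, not IMADA-specific): deciding whether a graph has a vertex cover of size at most k by branching on an uncovered edge takes time exponential only in the parameter k, not in the size of the graph.

```python
# FPT sketch: vertex cover of size <= k via bounded branching, O(2^k * m).

def has_vertex_cover(edges, k: int) -> bool:
    if not edges:
        return True               # all edges covered
    if k == 0:
        return False              # edges remain, but the budget is spent
    u, v = edges[0]               # any cover must contain u or v: branch
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))

# A 5-cycle needs three vertices to cover all of its edges:
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_vertex_cover(C5, 2), has_vertex_cover(C5, 3))  # False True
```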
Cryptology and Complexity Theory
The efficiency of cryptographic systems and protocols has been the focus of our recent work in cryptology. The study of multiplicative complexity is being extended to general techniques for reducing the gate complexity of cryptographic functions, including AES. We also study the advice complexity of online problems, and the border between tractable problems (such as those in P or FPT) and hard problems (such as NP-hard, W[1]-hard, and W[2]-hard problems).
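To illustrate multiplicative complexity (a standard small example, not taken from the group's papers): over the basis {XOR, AND, 1}, only AND gates are counted, and the majority of three bits, which naively needs three ANDs, can be computed with a single one.

```python
from itertools import product

# Majority of three bits with one AND gate instead of three:
#   MAJ(x, y, z) = ((x ^ y) & (y ^ z)) ^ y

for x, y, z in product((0, 1), repeat=3):
    naive = (x & y) ^ (x & z) ^ (y & z)   # three AND gates
    clever = ((x ^ y) & (y ^ z)) ^ y      # a single AND gate
    assert naive == clever == int(x + y + z >= 2)
print("all eight inputs agree: one AND gate suffices for MAJ3")
```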
Data Structures
Older work includes results on relaxed balance, where we generalize search tree rebalancing operations so that they can be postponed and scheduled flexibly, at a provably insignificant increase in amortized complexity.
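A toy sketch of the relaxed-balance idea (illustrative only; the actual structures in the literature use small local rebalancing steps, not the wholesale rebuild used here): updates never rebalance themselves, they only record that rebalancing is owed, and separately scheduled steps perform it later.

```python
from collections import deque

# Toy sketch: a BST whose insertions do no rebalancing; rebalance_step()
# can be run whenever convenient. Not the data structures from the papers.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class RelaxedBST:
    def __init__(self):
        self.root, self.pending = None, deque()

    def insert(self, key):                 # fast path: no rebalancing at all
        node, parent, cur = Node(key), None, self.root
        while cur:
            parent, cur = cur, (cur.left if key < cur.key else cur.right)
        if parent is None:
            self.root = node
        elif key < parent.key:
            parent.left = node
        else:
            parent.right = node
        self.pending.append(node)          # mark: rebalancing owed here

    def rebalance_step(self):              # postponed, schedulable work
        if not self.pending:
            return
        self.pending.clear()               # toy: settle every mark at once
        keys = []
        def collect(n):
            if n:
                collect(n.left); keys.append(n.key); collect(n.right)
        collect(self.root)
        def build(lo, hi):                 # rebuild perfectly balanced
            if lo >= hi:
                return None
            mid = (lo + hi) // 2
            t = Node(keys[mid])
            t.left, t.right = build(lo, mid), build(mid + 1, hi)
            return t
        self.root = build(0, len(keys))
```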