Faculty

Our Faculty is the academic home of researchers, teachers, and students of Mathematics and Computer Science. Its institutes and facilities are housed in the Mathematikon, pleasantly located on the Campus Neuenheimer Feld of Heidelberg University. Welcome!

Doctorate

The Doctorate signifies a proven ability to conduct independent scientific research. Under the auspices of the Combined Faculty of Mathematics, Engineering and Natural Sciences, we confer the academic degree Dr. rer. nat. in the subjects of mathematics and computer science.

Studies

Students interested in Mathematics, Computer Science, or an interdisciplinary field, pursuing a B.Sc., M.Sc., or M.Ed., and aiming for a career in research, teaching, or the private sector, will find here in Heidelberg a full range of first-class courses for a challenging and enriching educational experience in an intellectually stimulating environment with historical cachet.

Outreach

We seek to promote interest in mathematics and computer science by organizing events for schools and for the broader public. Alumni and newcomers alike join in, contributing to a shared pool of knowledge and contacts.

Mathematics and Computer Science — Research

Algorithms and Theoretical Computer Science

In the modern world, we are surrounded by algorithms. Every task performed by a computer, be it processing images, connecting to the internet, or using a credit card, requires a precise step-by-step guide that a computer can understand. This guide is an algorithm. But algorithms are not only for computers. Algorithms were used in mathematics at least 3600 years ago, and to this day many mathematical proofs are algorithmic. There are three primary questions one can ask about an algorithm: Does an algorithm exist that solves a particular problem? If so, how many resources (computation, communication, memory, storage) does it require? And how fast is it when executed on specific hardware?

In Heidelberg we work on algorithms from various perspectives. Some researchers focus on the performance of algorithms on real-world data or on the optimal usage of hardware to speed up algorithms, whereas others are interested in whether algorithms solving certain problems exist at all, independently of whether they are feasible in practice. In particular, we prove structural theorems on discrete objects, such as graphs and hypergraphs, that form the basis of new theoretical techniques; we explore how different, often well-known, theoretical techniques work in practice; and we engineer and optimize algorithms to achieve the best performance.
As part of our research, we look at different problems and techniques that make algorithms scale better. This includes the theory and engineering of (hyper)graph algorithms, randomized algorithms, massively parallel algorithms (many-core, multi-core, distributed memory), communication-efficient algorithms, and out-of-core algorithms. Our research cluster also develops open-source software for solving algorithmic problems and uses its know-how to solve selected concrete application problems.

Different viewpoints on algorithms

Below we describe four different perspectives from which we attack problems involving algorithms. For many of us, graphs and hypergraphs are the objects we work with. They consist of a (finite) set of vertices, sometimes also called nodes, and a set of edges. The vertices have labels, say, integer labels 1 up to n, where n is the number of vertices; however, in applications each vertex often represents a certain object, for example, a person, building, task, objective, or variable.

An edge in a graph is simply a set of two vertices; that is, we think of these two vertices as being related to each other. It is common to visualize vertices as dots and edges as lines between two vertices; nevertheless, a graph is a priori independent of such a drawing. In hypergraphs we also allow edges of larger size, that is, with three or more vertices. Such a hyperedge is usually visualized as a bubble around all of its vertices.

Edges in graphs can also be directed, that is, have an orientation and point from one vertex to another, or have weights associated with them.
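To make these definitions concrete, the following short Python sketch (with illustrative data of our own choosing) shows common in-memory representations of a graph, a hypergraph, and a directed weighted graph:

    # An undirected graph as an adjacency list: vertices are labeled 0..n-1
    # and each edge {u, v} is recorded in the lists of both endpoints.
    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # a cycle on 5 vertices
    adjacency = [[] for _ in range(n)]
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)

    # A hypergraph: each hyperedge is a set of two OR MORE vertices
    # (the "bubble" around its vertices in a drawing).
    hyperedges = [{0, 1, 2}, {2, 3}, {1, 3, 4}]

    # A directed, weighted graph: each edge has an orientation and a weight,
    # stored here as vertex -> list of (target, weight) pairs.
    weighted_out_edges = {0: [(1, 2.5)], 1: [(2, 1.0)], 2: [(0, 0.7)]}

    print(adjacency)  # the neighbors of each vertex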

Graphs are fundamental structures in mathematics. They are very versatile and hence appear in many contexts. Although (hyper)graphs are easy to define, they exhibit fascinating and partly mysterious behavior that has been the subject of very active research over the past decades.

Algorithm Engineering

Traditionally, algorithms are designed using simple models of problems and machines. In turn, important results can be proven, such as performance guarantees for all possible inputs. This often yields elegant solutions that are adaptable to many applications, with predictable performance for previously unknown inputs. In algorithm theory, however, taking up and implementing an algorithm is left to application development. Unfortunately, this mode of transferring results is a slow process, and sometimes the theoretically best algorithms perform poorly in practice. This causes a growing gap between theory and practice: realistic hardware, with its parallelism, memory hierarchies, etc., is diverging from traditional machine models.

In contrast to algorithm theory, algorithm engineering uses an innovation cycle in which algorithm design based on realistic models, theoretical analysis, efficient implementation, and careful experimental evaluation on real-world inputs close the gap between theory and practice and lead to improved application codes and reusable software libraries. This yields results that practitioners can rely on for their specific applications.

The algorithm engineering group tackles a broad spectrum of scalable graph algorithms, in particular in the context of very large inputs. Research rests on five highly interconnected pillars: multilevel algorithms, practical kernelization, parallelization, memetic algorithms, and dynamic algorithms. To this end, our group engineers algorithms that improve on known methods. We typically release the techniques and algorithms we develop as easy-to-use open-source software. Examples include widely used libraries of algorithms for (hyper)graph partitioning, graph drawing, (weighted) independent sets, graph clustering, graph generation, minimum cuts, process mapping, and many more.
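To give a flavor of what such a data reduction (kernelization) rule looks like, consider the classical degree-one rule for maximum independent set: a vertex with exactly one neighbor can always be taken into the solution. The following Python sketch is our own illustration of this textbook rule, not code from the group's libraries:

    # Degree-one reduction for MAXIMUM INDEPENDENT SET: if vertex v has
    # exactly one neighbor u, some maximum independent set contains v,
    # so we may take v and delete both v and u from the graph.

    def reduce_degree_one(adjacency):
        """adjacency: dict vertex -> set of neighbors (undirected graph).
        Returns (vertices forced into the solution, reduced graph)."""
        adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
        forced = []
        queue = [v for v in adj if len(adj[v]) == 1]
        while queue:
            v = queue.pop()
            if v not in adj or len(adj[v]) != 1:
                continue  # v was already deleted or its degree changed
            (u,) = adj[v]
            forced.append(v)      # v joins the independent set
            for w in adj.pop(u):  # delete u; its neighbors lose an edge
                adj[w].discard(u)
                if len(adj[w]) == 1:
                    queue.append(w)
            adj.pop(v, None)
        return forced, adj

    # On the path 0-1-2-3 the rule alone solves the instance:
    g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(reduce_degree_one(g))  # ([3, 1], {}): {1, 3} is a maximum solution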

Theoretical Computer Science

One of the most famous algorithmic problems is the Travelling Salesman Problem: given n cities and the distances between each pair of cities, what is the shortest tour through all of them? (This can be modelled by a complete graph on n vertices with weights on the edges.) Surely, we can simply check all potential routes and choose the shortest one. However, this is not feasible at all! There are roughly n! such routes, which is a gigantic number even if n is only moderately large, say, anything beyond 20.

Hence, can we do better than that? Yes, but it seems that, in general, a number of computation steps exponential in n is necessary.
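The brute-force approach is easy to state in code, which also makes the blow-up tangible. Here is a minimal Python sketch (the distance matrix is assumed given; fixing the start city leaves (n-1)! orders to try):

    import math
    from itertools import permutations

    def brute_force_tsp(dist):
        """dist: symmetric n x n matrix of pairwise distances.
        Fixes city 0 as the start and tries every order of the rest."""
        n = len(dist)
        best_tour, best_len = None, float("inf")
        for perm in permutations(range(1, n)):  # (n-1)! candidate tours
            tour = (0,) + perm
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        return best_tour, best_len

    # Even with the start city fixed, the number of tours explodes:
    for n in (10, 15, 20, 25):
        print(n, math.factorial(n - 1))  # 362880, ..., about 6.2e23 for n = 25

Dynamic programming (the Held-Karp algorithm) improves on this, but its running time is still on the order of n^2 * 2^n, exponential in n.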

Proving that an algorithm never needs more than a guaranteed number of steps is known as worst-case analysis. To this end, heavy mathematical theorems are frequently exploited to exhibit structure in graphs, which in turn yields improvements in the running time of algorithms.

Our research focuses on the investigation of graphs and hypergraphs from various viewpoints. This includes questions of the following form:

  • When is a particular "simple" graph a subgraph of another graph?
  • When can the vertex set or edge set of a graph be decomposed into copies of another "simple" graph?
  • How do typical graphs behave differently in comparison to worst case instances?

Among other results, we solved, for large instances, the famous Oberwolfach problem, posed by Gerhard Ringel in the 1960s.

Application Specific Computing

For most problems there are many different algorithms that solve them, and their resource consumption (computation, communication, memory, storage) depends on the problem parameters (data sizes, data arrangement, data types, data values). So already from a theoretical point of view, different problem parameters call for different algorithms, but there are too many algorithms to evaluate them all with respect to resource consumption. Moreover, most algorithms have many different implementations, and the execution time of each implementation depends on the specific hardware. So from a practical point of view, it is very difficult to know, for given problem parameters and hardware, what the best algorithm and implementation are. We can afford to evaluate only very few combinations, because developing good algorithms and implementations is a lengthy process; thus, finding the best combination is like looking for a needle in a haystack.

We tackle this challenge by approaching the best combination of algorithm and implementation from both the top and the bottom. Coming from the bottom means creating algorithmic building blocks with almost optimal performance on specific hardware, i.e., the given computation cannot be performed in significantly less time. Having such building blocks, we can quickly assemble high-performance algorithms, provided the interaction between the blocks is also efficient. Coming from the top, we use application guidance to determine the type of algorithmic structures we need to target. We work mostly with scientific computing applications, where the guidance is given by the numerical analysis of the schemes. The building blocks are then parallel data selection and parallel linear algebra operations.
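As a toy illustration of how problem parameters drive the choice of algorithm (our own simplified example, not one of the actual building blocks), the following Python sketch selects between a dense and a sparse matrix-vector kernel based on a single parameter, the density of the matrix:

    import numpy as np
    from scipy.sparse import csr_matrix

    def matvec(A, x, density_threshold=0.1):
        """Pick an implementation based on a problem parameter (density).
        The threshold is illustrative; a real system would calibrate it
        for the specific hardware."""
        density = np.count_nonzero(A) / A.size
        if density < density_threshold:
            return csr_matrix(A) @ x  # sparse building block
        return A @ x                  # dense BLAS-backed building block

    rng = np.random.default_rng(0)
    A = rng.random((1000, 1000))
    A[A < 0.99] = 0.0                 # keep about 1% of the entries
    x = rng.random(1000)
    print(matvec(A, x)[:3])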

For specific problems we have succeeded in designing solutions that are almost optimal, resulting in staggering performance on parallel hardware (GPUs, many-core CPUs, FPGAs). However, for most problems such resounding solutions remain unknown, and even where specific cases are solved, more general problem formulations remain a challenge.

Optimization for Machine Learning

We focus on large-scale combinatorial optimization problems with applications in computer vision and bioinformatics. "Combinatorial" means that the solution must be selected from a finite but exponentially growing set, such as the set of all subsets of a given set, or of all permutations or paths in a given graph. Most such problems can be formulated in a common format, as integer linear programs, and solved by off-the-shelf software. "Large-scale" means that the size of the problems makes their solution with off-the-shelf methods inefficient or even practically impossible. This is indeed often the case in applications such as stereo vision and image segmentation, estimation of object rotation and translation in space, and cell matching and tracking in microscopic images.
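As a minimal sketch of that common format (a toy instance of our own, solved with SciPy's generic MILP interface rather than with the dedicated methods discussed here; requires SciPy >= 1.9), a tiny assignment problem can be written as an integer linear program:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Assign 3 workers to 3 jobs: minimize sum_ij cost[i, j] * x[i, j]
    # subject to every worker doing exactly one job, every job being
    # done by exactly one worker, and x[i, j] binary.
    cost = np.array([[4.0, 1.0, 3.0],
                     [2.0, 0.0, 5.0],
                     [3.0, 2.0, 2.0]])
    n = cost.shape[0]

    A = np.zeros((2 * n, n * n))       # row-sum and column-sum constraints
    for i in range(n):
        A[i, i * n:(i + 1) * n] = 1.0  # sum_j x[i, j] == 1
        A[n + i, i::n] = 1.0           # sum_i x[i, j] == 1

    res = milp(c=cost.ravel(),
               constraints=LinearConstraint(A, lb=1.0, ub=1.0),
               integrality=np.ones(n * n),
               bounds=Bounds(0.0, 1.0))
    print(res.x.reshape(n, n))         # the optimal 0/1 assignment matrix
    print(res.fun)                     # its cost (here 5.0)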

We not only develop dedicated algorithms to optimize given combinatorial problems, but also learn problem parameters from training data. To this end we combine artificial neural networks with the most efficient combinatorial techniques.

Working groups and junior research group leaders

Prof. Dr. Christian Schulz
Algorithm Engineering

Scalable graph algorithms, practical data reductions, dynamic graph algorithms, memetic algorithms

Jun.-Prof. Dr. Felix Joos
Theoretical Computer Science

Graph theory, algorithms on graphs, discrete mathematics

Prof. Dr. Robert Strzodka
Application Specific Computing

Parallel algorithms and hardware (GPU, many-core CPU, FPGA) in relation to data representation, data access, data structures, numerical methods, and programming abstractions

Dr. Bogdan Savchynskyy
Optimization for Machine Learning

Large-scale discrete optimization, discrete graphical models, assignment and tracking, learning of combinatorial problems

PD Dr. Wolfgang Merkle
Theoretical Computer Science

Logic and computability