Efficient Algorithms for Massive Graphs (Former Research Group)

Vision and Research Strategy

Large networks, from the Internet and the World Wide Web to social and biological networks, have been the focus of significant research activity in many disciplines. A crucial component in the study of these networks is the development of algorithms, e.g., for disseminating information or balancing load, that work without global knowledge or central control and are robust to network changes and failures. A second component is, given a large data set describing such a network, to approximate graph characteristics such as the diameter, expansion, or the number of occurrences of certain subgraphs. Due to the size and complexity of these data sets, it is often only possible to read parts of the data sequentially, which leads to the data stream model. The main goal of our research group is to analyze these very large networks and massive data sets by means of mathematical models and efficient algorithms.

Composition of Group

Our group was founded in August 2012 in the Cluster of Excellence. 

Research Topics and Achievements

Load Balancing

The problem of load balancing on large networks is to design distributed algorithms that balance the load across the units of the network efficiently. Most of our work focuses on two popular communication models, the diffusion model and the matching model. For both models, we analyze several iterative protocols that employ the idea of randomized rounding, and we show that these protocols achieve almost the same discrepancy within the same number of rounds as their counterparts for the case of divisible load.
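As an illustration (not the group's exact protocol), the following minimal Python sketch shows one round of first-order diffusion with randomized rounding: each edge flow alpha*(x_u - x_v) is rounded up or down at random so that only whole tokens move. The cycle graph, the step size alpha = 0.5, and the initial token counts are arbitrary choices for the example.

```python
import math
import random

def rand_round(x, rng):
    """Round x to floor(x) or floor(x)+1, rounding up with probability
    equal to the fractional part (unbiased randomized rounding)."""
    fl = math.floor(x)
    return fl + (1 if rng.random() < x - fl else 0)

def diffusion_round(loads, neighbors, rng, alpha=0.5):
    """One synchronous round of first-order diffusion with indivisible
    tokens: the fractional flow alpha*(x_u - x_v) over each edge is
    randomly rounded to an integer number of tokens."""
    flows = {}
    for u in range(len(loads)):
        for v in neighbors[u]:
            if u < v:  # handle each edge exactly once
                flows[(u, v)] = rand_round(alpha * (loads[u] - loads[v]), rng)
    new = list(loads)
    for (u, v), f in flows.items():
        new[u] -= f  # a positive flow moves tokens from u to v
        new[v] += f
    return new

rng = random.Random(1)
n = 8
cycle = {u: [(u - 1) % n, (u + 1) % n] for u in range(n)}
loads = [16] + [0] * (n - 1)  # all tokens start at one node
for _ in range(60):
    loads = diffusion_round(loads, cycle, rng)
```

The total load is conserved and all loads stay integral; since the rounding is unbiased, the integral process tracks the idealized divisible-load diffusion, and the discrepancy (max load minus min load) shrinks over the rounds.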

Random Walks

The cover time of random walks is a ubiquitous graph parameter with many interesting relations to electrical networks, spectral graph theory, probability theory, and complexity theory. We study different ways to speed up random walks with the aim of decreasing their cover time. For instance, we examine random walks that are able to explore their entire neighborhood in each step, and we analyze the cover time of multiple random walks run in parallel.
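A small simulation makes the notion of cover time and the speed-up from parallel walks concrete. This is a sketch under simplifying assumptions (simple synchronous walks on an unweighted cycle; the graph size and start vertices are arbitrary choices for the example), not the analyses carried out by the group.

```python
import random

def cover_time(neighbors, starts, rng):
    """Number of synchronous steps until the union of the given random
    walks (one walk per start vertex) has visited every vertex."""
    visited = set(starts)
    positions = list(starts)
    steps = 0
    while len(visited) < len(neighbors):
        steps += 1
        # each walk moves to a uniformly random neighbor
        positions = [rng.choice(neighbors[v]) for v in positions]
        visited.update(positions)
    return steps

rng = random.Random(42)
n = 20
cycle = {u: [(u - 1) % n, (u + 1) % n] for u in range(n)}
single = cover_time(cycle, [0], rng)                # one walk
parallel = cover_time(cycle, [0, 5, 10, 15], rng)   # four parallel walks
```

On the cycle, a single walk needs on the order of n^2 expected steps to cover all vertices, while several walks started at spread-out positions typically cover the graph substantially faster.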

Rumor Spreading

Randomized rumor spreading refers to simple epidemic protocols that spread a message (rumor) from one node to all other nodes. This is done by letting nodes exchange the rumor with a randomly chosen neighbor in each round. The main question is to determine the number of rounds required to spread the rumor to all nodes of a given network. Our research includes: (1) analyzing the rumor spreading time on different network models, e.g., social networks; (2) studying its relation to graph parameters such as vertex and edge expansion; (3) understanding the randomness requirements of rumor spreading.
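To make the round-complexity question tangible, here is a minimal sketch of the push variant of the protocol (informed nodes only send the rumor; the full protocols studied above also let nodes pull it). The complete graph on 64 nodes and the random seed are arbitrary choices for the example.

```python
import math
import random

def push_rounds(neighbors, source, rng):
    """Rounds of the push protocol until every node is informed: in each
    round, every informed node forwards the rumor to one neighbor chosen
    uniformly at random."""
    informed = {source}
    rounds = 0
    while len(informed) < len(neighbors):
        rounds += 1
        informed |= {rng.choice(neighbors[v]) for v in informed}
    return rounds

rng = random.Random(7)
n = 64  # complete graph K_64
complete = {v: [u for u in range(n) if u != v] for v in range(n)}
r = push_rounds(complete, 0, rng)
```

Since the informed set can at most double per round, at least log2(n) rounds are always needed; on the complete graph the push protocol is known to finish in roughly log2(n) + ln(n) rounds with high probability.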

Projects and Collaborations

Collaborations within the Cluster

  • Tobias Friedrich (now at University of Jena, Germany)
  • Martin Hoefer

External Collaborations

  • Hoda Akbari (Simon Fraser University, Canada)
  • Petra Berenbrink (Simon Fraser University, Canada)
  • Paul Bogdan (Carnegie Mellon University, USA)
  • Nikolaos Fountoulakis (University of Birmingham, UK)
  • George Giakkoupis (INRIA Rennes, France)
  • Daniel M. Kane (Stanford University, USA)
  • Konstantinos Panagiotou (University of Munich, Germany)
  • Viresh Patel (Durham University, UK)
  • Alexandre Stauffer (Microsoft Research Redmond, USA)
  • Dirk Sudholt (University of Sheffield, UK)
  • Philipp Woelfel (University of Calgary, Canada)

Dr. Thomas Sauerwald

Thomas Sauerwald was head of the Independent Research Group Efficient Algorithms for Massive Graphs within the Cluster of Excellence from August 2012 until June 2013. 

Phone: +44 1223 763538