Demonstrators

To make meaningful progress towards the stated goals and to keep our research focused, we have chosen to present our results by means of the following demonstrators. They integrate our work across different research areas and research groups, and they showcase that work to industry and the public.

Software Integration Platform (SIP)

The goal of the Software Integration Platform (SIP) is to offer a stable, robust, and tested software platform for the demonstrators and research areas of the cluster. The current demonstrator enables researchers of the MMCI cluster to make their research results directly available to the research community as well as the general public. It greatly simplifies the use of crowdsourcing and related approaches for user tests and other research activities. The availability of a stable software platform also encourages the uptake of our results by the research community and by industry.

Multimodal In-Car Dialog Demonstrator

The goal has been a multimodal human-machine interface that fuses haptic interaction between the user and the car with speech, gaze, and gesture recognition. This scenario has been pursued intensively at DFKI for several years. The resulting demonstrator, based on a Mercedes R-Class SUV donated to DFKI by Daimler AG, continues the long-standing cooperation with M. Pinkal's group on multimodal in-car dialog (the TALK project). Cluster-internal collaborations related to the demonstrator included improving a free-text dictation system for embedded mobile scenarios (D. Klakow); investigating the influence of verbal references to buildings on the gaze behavior of a driver (M. Crocker); using advanced video encoding protocols for mobile live streaming via UMTS/HSUPA (T. Herfet); and studying the use of animated characters in the car for persuasive systems (M. Kipp's independent research group).

Open Science Web Demonstrator

The long-term objective has been to automatically build and maintain comprehensive knowledge bases about entities and their semantic classes, relationships between entities, associated multimodal information such as photos, and informative cross-linkage with Web sites and Web 2.0 sources. This line of demonstrators builds on scientific results from research area 5 and connects them with results from research areas 1 and 3 and the independent research groups of Hannah Bast, Sebastian Michel, and Ralf Schenkel. Building on earlier work in Gerhard Weikum's group, which has developed methodologies for automatically constructing a large knowledge base from Wikipedia and other Web sources, the researchers addressed challenges in search and exploration, interactive reasoning, discovery and visualization, and multimodal knowledge.
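To make the flavor of such a knowledge base concrete, the following minimal Python sketch is purely illustrative: the FactStore class, the entity names, and the relation names are invented for this example and are not taken from the actual demonstrator. It shows how facts about semantic classes, relations between entities, and associated multimodal items can be kept as subject-relation-object triples and answered with simple lookups.

# Illustrative sketch only: a toy store of (subject, relation, object) triples,
# in the spirit of Wikipedia-derived knowledge bases. All names are examples.
from collections import defaultdict

class FactStore:
    def __init__(self):
        self._by_relation = defaultdict(list)

    def add(self, subject, relation, obj):
        # Store one fact triple, indexed by relation for simple lookups.
        self._by_relation[relation].append((subject, obj))

    def query(self, relation, subject=None):
        # Return all (subject, object) pairs for a relation,
        # optionally restricted to one subject entity.
        pairs = self._by_relation.get(relation, [])
        return [p for p in pairs if subject is None or p[0] == subject]

kb = FactStore()
kb.add("Max_Planck", "type", "physicist")        # entity -> semantic class
kb.add("Max_Planck", "bornIn", "Kiel")           # relation between entities
kb.add("Max_Planck", "hasPhoto", "planck.jpg")   # associated multimodal item

print(kb.query("bornIn", subject="Max_Planck"))  # [('Max_Planck', 'Kiel')]

A real system of this kind additionally has to extract such facts automatically from text, resolve ambiguous entity names, and support reasoning and exploratory search over them, which is where the research challenges named above arise.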

Virtual Environment Demonstrator

The overall goal has been to develop a comprehensive technology platform, integrating as much of the technology developed within the cluster as is meaningful and possible, while in turn providing useful tools for research within the cluster. The result is a demonstrator used in a variety of collaborations within the cluster, including the visualization of 3D structures in computational biology and drug design (A. Hildebrandt), semantic virtual environments (M. Klusch), intelligent agents for virtual environments (K. Fischer), linking gaze and speech (M. Crocker), navigating virtual environments with speech-based instructions (A. Koller, M. Schröder), gesture-based interaction (M. Kipp), performance capture for virtual environments (C. Theobalt), and hybrid verification (W. Stephan, A. Nonnengart). It also serves as a platform for interactions with the public sector (cultural heritage, urban planning) and for collaborations with industry.