Virtual Environments Demonstrator

Virtual environments provide an almost ideal demonstrator for research on multimodal computing and interaction. On the one hand, a wide range of expertise across different modalities and disciplines is required and must be integrated to build the underlying software platform (including vision, graphics, simulation, animation, gesture, speech, text and others).

On the other hand, the availability of a virtual environment platform itself provides the basis for performing research in multimodal technology, e.g. for virtual experiments that offer repeatable and controllable multimodal input and context. Our approach combines visual simulation with sophisticated post-processing of existing footage such as images, video, or 3D scans. Consequently, the overall goal of the virtual environments demonstrator has been to develop a comprehensive technology platform that integrates as much of the technology developed within the cluster as is meaningful and feasible, while in turn providing useful tools for research within the cluster. A key decision for the VE demonstrator has been to base it almost entirely on Web technologies.
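To illustrate what basing the platform on Web technologies means in practice, the following is a minimal, hypothetical sketch of how a declarative 3D scene can sit inside an ordinary HTML page using XML3D-style markup. The file names, element attributes, and shader URNs shown here are illustrative placeholders, not assets or exact APIs from the demonstrator itself:

```html
<!-- Hypothetical sketch: an XML3D scene embedded in a regular HTML page.
     Asset names (palazzo.xml, xml3d.js) and URNs are placeholders. -->
<!DOCTYPE html>
<html>
<head>
  <title>Embedded 3D scene (sketch)</title>
  <!-- Script providing XML3D support in browsers without native support -->
  <script src="xml3d.js"></script>
</head>
<body>
  <p>Ordinary page content surrounds the scene like any other element.</p>

  <xml3d style="width: 640px; height: 480px;" activeView="#mainView">
    <defs>
      <!-- Surface and light shaders declared as reusable resources -->
      <shader id="phong" script="urn:xml3d:shader:phong">
        <float3 name="diffuseColor">0.8 0.7 0.6</float3>
      </shader>
      <lightshader id="dirLight" script="urn:xml3d:lightshader:directional">
        <float3 name="intensity">1.0 1.0 1.0</float3>
      </lightshader>
    </defs>

    <view id="mainView" position="0 2 10"></view>
    <light shader="#dirLight"></light>

    <group shader="#phong">
      <!-- Geometry referenced like any other web resource -->
      <mesh type="triangles" src="palazzo.xml#mesh"></mesh>
    </group>
  </xml3d>
</body>
</html>
```

Because the scene is expressed as DOM elements, it can be styled, scripted, and linked with the same mechanisms as the rest of the page, which is what makes embedding 3D content into pages such as the Wikipedia example below possible.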

The demonstrator is used in a variety of collaborations within the cluster, including the visualization of 3D structures in computational biology and drug design (A. Hildebrandt), semantic virtual environments (M. Klusch), intelligent agents for virtual environments (K. Fischer), linking gaze and speech (M. Crocker), navigating virtual environments with speech-based instructions (A. Koller, M. Schröder), gesture-based interaction (M. Kipp), performance capture for virtual environments (C. Theobalt), and hybrid verification (W. Stephan, A. Nonnegart). It also serves as a platform for interactions with the public sector (cultural heritage and urban planning, G. Demme) and for collaborations with industry.

A modified version of Firefox showing the Wikipedia page of Venice with a newly embedded 3D scene of the Palazzo Contarini using XML3D


Architecture of the Demonstrator

Collaborations Using the Demonstrator

Conclusions and Future Work