Compiler Support for Media Applications

Fund Coordinator

Project Description

Multimodal computation is also high-performance computing: large amounts of data have to be processed within tight time constraints, often in real time. Moreover, the amount of data to be processed will not stay constant; it will grow steadily. Parallel computing is therefore vital to the future success of multimodal computing.

In our MMCI project, we focused on code optimizations to accelerate media applications, especially high-performance graphics systems such as real-time ray tracers. One major issue in high-performance rendering systems is the so-called shaders. Shaders calculate various aspects of the generated image, such as lighting and color. Shading performance is of utmost importance for the whole system, because the renderer spends most of its time inside a shader. Modern graphics systems use programmable shading, i.e. the shader is not hardwired into the system's code but can be provided by the designer of the 3D scene.
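To illustrate the kind of computation a shader performs, the following sketch shows a minimal Lambertian diffuse shader. All names (`Light`, `lambert_shader`) are hypothetical and only illustrate the idea; real shaders run in a shading language inside the renderer's inner loop, not in Python.

```python
from dataclasses import dataclass

@dataclass
class Light:
    direction: tuple   # unit vector from the surface point toward the light
    intensity: float

def lambert_shader(normal, base_color, lights):
    """Hypothetical diffuse shader: sums the N·L contribution of each light
    and scales the surface color by the accumulated brightness."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    brightness = sum(max(0.0, dot(normal, light.direction)) * light.intensity
                     for light in lights)
    return tuple(channel * brightness for channel in base_color)
```

Since a loop like this runs once per ray or per visible surface point, even small inefficiencies in the shader multiply across millions of invocations per frame, which is why shading performance dominates overall rendering time.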

Shaders are similar to plugins in other applications, but differ in that they are typically called from the innermost loops of a renderer and are therefore highly performance critical. A shader is always mapped to both a hardware architecture and a renderer. For example, RenderMan's illuminance loop needs to interact with the renderer to query all light sources, yet each renderer exposes a different API for this task. To avoid costly glue code, shader compilers often target one specific renderer. This forces renderer implementors to develop a whole compiler infrastructure alongside their renderer; for example, at least three different RenderMan compilers are publicly available.
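The coupling between shader and renderer can be sketched as follows. The two renderer classes and the `illuminance` adapter below are hypothetical; they only illustrate why an illuminance-style loop needs renderer-specific glue code when each renderer exposes light sources through a different API.

```python
# Two hypothetical renderers that expose their light sources through
# different APIs, as described above.

class RendererA:
    def __init__(self, lights):
        self._lights = lights
    def enumerate_lights(self, point):
        # API style 1: returns an eagerly built list of lights
        return list(self._lights)

class RendererB:
    def __init__(self, lights):
        self._lights = lights
    def light_iter(self, point):
        # API style 2: lazily yields lights one at a time
        yield from self._lights

def illuminance(renderer, point):
    """Glue code that hides the per-renderer API behind a single loop
    interface, so a shader can iterate over lights uniformly."""
    if hasattr(renderer, "enumerate_lights"):
        yield from renderer.enumerate_lights(point)
    else:
        yield from renderer.light_iter(point)
```

A shader compiler that targets a single renderer can emit the matching calls directly and skip the dispatch overhead of such an adapter, which is exactly why compilers are tied to one renderer and why each renderer tends to ship its own compiler infrastructure.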