Research - Video Coding

In the area of video coding, we work on algorithms for the efficient compression as well as the robust transmission of video data. Besides an appropriate representation of the content, the focus also lies on practical applicability (e.g., scalability). Our work is closely related to the standardization efforts for video coding algorithms in the Joint Collaborative Team on Video Coding (JCT-VC) and in the Joint Collaborative Team on 3D Video Coding Extension Development (JCT-3V) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). We take an active part in this standardization process through numerous contributions. The following sections give a short introduction to the main areas of research at the institute.

New Compression Algorithms

The standardization of video compression has two main objectives: increasing the compression efficiency on the one hand and reducing the computational complexity on the other. In general, these two goals cannot be achieved simultaneously, since higher compression efficiency usually comes at the cost of increased complexity.

For a new algorithmic approach, we normally aim first for the maximum rate-distortion performance, i.e., the best achievable quality at a given bit rate. In a second step, the necessary complexity is reduced by suitable simplifications.
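The quality-versus-rate trade-off is typically formalized as Lagrangian rate-distortion optimization: among the candidate coding modes for a block, the encoder picks the one minimizing the cost J = D + lambda * R. The following sketch illustrates the principle; the mode names, distortion, and rate values are purely illustrative.

```python
# Lagrangian rate-distortion optimization sketch: pick the coding mode
# minimizing J = D + lambda * R. All candidate numbers are illustrative.

def select_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, rate_bits) tuples."""
    best = min(candidates, key=lambda c: c[1] + lam * c[2])
    return best[0]

candidates = [
    ("intra", 1200.0, 96),  # lower distortion, higher rate (illustrative)
    ("inter",  900.0, 64),
    ("skip",  2500.0,  2),  # poor quality, almost free in bits
]

print(select_mode(candidates, lam=10.0))   # small lambda favors quality
print(select_mode(candidates, lam=500.0))  # large lambda favors low rate
```

The multiplier lambda steers the operating point: a small lambda weights distortion heavily, a large lambda pushes the encoder toward cheaper modes.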

Robust Video Transmission

Modern, commonly used packet-switched networks are often not able to provide a constant quality of service. Reliable and efficient high-quality video transmission therefore relies on application-optimized error control. A special class of transmission systems comprises applications where transmission between sender and receiver is possible over multiple channels. A clever combination of these channels lays the foundation for robust video transmission.

The error protection measures may span multiple protocol layers, and their practical implementation depends on the specific requirements of each application. Possible measures include combining scalable video coding with channel coding (unequal error protection), channel-adaptive source coding (multiple description coding, error-resilient coding), and strategies for adaptive retransmission of lost packets (automatic repeat request).
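The idea behind unequal error protection can be sketched in a few lines: more important layers receive a larger share of forward-error-correction redundancy. The redundancy fractions and bitrates below are hypothetical values chosen only for demonstration.

```python
# Unequal error protection (UEP) sketch: the base layer gets the most
# FEC redundancy, enhancement layers progressively less.
# All numbers are hypothetical, for illustration only.

FEC_REDUNDANCY = {0: 0.50, 1: 0.25, 2: 0.10}  # layer id -> parity fraction

def protected_rate(layer_rates):
    """layer_rates: dict mapping layer id -> source bitrate (kbit/s).
    Returns the total bitrate including FEC overhead."""
    return sum(rate * (1.0 + FEC_REDUNDANCY.get(layer, 0.0))
               for layer, rate in layer_rates.items())

rates = {0: 500, 1: 1000, 2: 2000}  # base layer is smallest but most important
print(protected_rate(rates))        # 500*1.5 + 1000*1.25 + 2000*1.1 = 4200.0
```

If the channel corrupts packets, the strongly protected base layer survives with high probability, so a degraded but decodable video remains available.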

Scalable Video Coding

In scalable video coding, the idea of video coding is extended so that the bitstream contains multiple versions of the same video. These versions (also referred to as layers) can differ in spatial resolution (e.g., 1080p and 4K), temporal resolution (e.g., 24 and 48 pictures per second), or the quality of the reconstruction. The layers are arranged in packets with a header that easily identifies the layer (and thus the importance) of each packet.

This approach enables some interesting features. For example, while streaming a video, the packets of higher layers can be dynamically discarded. While this lowers the quality of the reconstructed video, it also reduces the required bitrate. This way, the bitrate of the stream can be dynamically adapted to the available channel while the video can still be decoded without interruption. Another scenario is backward compatibility. For example, a stream can contain a 1080p layer and a 4K layer: older devices can decode the 1080p layer, while current devices can also decode the 4K layer.
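Because each packet header identifies its layer, stream adaptation reduces to filtering packets by layer id. The sketch below assumes a hypothetical packet structure; in an actual bitstream the layer is signaled in the packet header (e.g., in the NAL unit header).

```python
# Bitstream adaptation sketch: discard enhancement-layer packets to fit
# the available channel rate. The packet dictionaries are hypothetical.

def adapt_stream(packets, max_layer):
    """Keep only packets whose layer id does not exceed max_layer."""
    return [p for p in packets if p["layer"] <= max_layer]

packets = [
    {"layer": 0, "size": 800},  # base layer (e.g., 1080p)
    {"layer": 1, "size": 600},  # enhancement layer (e.g., 4K)
    {"layer": 0, "size": 750},
    {"layer": 1, "size": 650},
]

base_only = adapt_stream(packets, max_layer=0)
print(sum(p["size"] for p in base_only))  # bytes left after dropping layer 1
```

Note that the filter never touches the base layer, so the adapted stream always remains decodable, only at reduced quality or resolution.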

In order to lower the overall bitrate that this scalable scheme requires, the dependencies between the layers are exploited as well as possible. For example, in the scenario with two spatial layers (1080p and 4K), the layers differ in spatial resolution but contain the same video frames. So when encoding the 4K layer, the reconstructed 1080p layer can be used as a prediction for the corresponding frame in the 4K layer. This inter-layer information flow is not limited to reconstructed pixels, but can include all information that is utilized for video coding (e.g., motion information, mode information, residual signals).
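The inter-layer prediction of reconstructed pixels can be sketched as follows: the low-resolution reconstruction is upsampled and subtracted from the high-resolution frame, so that only the residual needs to be coded. A real codec would use proper interpolation filters; nearest-neighbor upsampling and the tiny frames below are used only to keep the example short.

```python
# Inter-layer prediction sketch between two spatial layers.
# Nearest-neighbor 2x upsampling stands in for the codec's real
# interpolation filter; pixel values are illustrative.

def upsample2x(frame):
    """Nearest-neighbor 2x upsampling of a 2D list of pixel values."""
    out = []
    for row in frame:
        wide = [v for v in row for _ in (0, 1)]  # repeat each pixel
        out.append(wide)
        out.append(list(wide))                   # repeat each row
    return out

def residual(hi, pred):
    """Per-pixel difference between the frame and its prediction."""
    return [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(hi, pred)]

low = [[10, 20],
       [30, 40]]              # reconstructed base-layer frame
hi  = [[11, 10, 21, 20],
       [10, 10, 20, 20],
       [31, 30, 41, 40],
       [30, 30, 40, 40]]      # original enhancement-layer frame

pred = upsample2x(low)
print(residual(hi, pred))     # mostly zeros: cheap to encode
```

The closer the upsampled base layer matches the enhancement layer, the sparser the residual, and the fewer bits the enhancement layer costs.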

Contact: Dipl.-Ing. Christian Feldmann

3D Technologies in Video Coding

When several views/cameras of the same scene exist, they can be projected onto one another. The views can be taken at the same time (several cameras) or at different times (a single, moving camera). We investigate whether the relations between the different views can be exploited in order to increase the compression efficiency of 2D or 3D video. By adding depth maps to 2D video data, it is possible to synthesize arbitrary views of a scene. However, occlusions and distortions can occur in the synthesized view. Furthermore, adding depth to the data is only sensible if it is used for display, since it increases the size of the video. For 2D video, methods need to be developed which can exploit the relations between different views without transmitting the depth.
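The core of depth-based view synthesis can be sketched for a single scan line: each pixel is shifted horizontally by a disparity derived from its depth (disparity = focal length * baseline / depth). The camera parameters and pixel values below are hypothetical; a practical renderer must additionally resolve occlusions by depth ordering and fill disocclusion holes.

```python
# Depth-image-based rendering sketch for one scan line: warp pixels to a
# horizontally shifted virtual view using disparity = f * b / z.
# Parameters are hypothetical; occlusion handling is deliberately naive.

def synthesize_line(pixels, depth, f, baseline):
    """Warp one scan line to a virtual view; None marks disocclusion holes."""
    out = [None] * len(pixels)
    for x, (value, z) in enumerate(zip(pixels, depth)):
        d = round(f * baseline / z)     # disparity in pixels
        xt = x + d                      # target position in the virtual view
        if 0 <= xt < len(out):
            out[xt] = value             # a real renderer would keep the
                                        # closest pixel on collisions
    return out

line  = [50, 60, 70, 80, 90]
depth = [100, 100, 50, 100, 100]        # the middle pixel is closer
print(synthesize_line(line, depth, f=100, baseline=1))
```

The None entries show exactly the problem mentioned above: positions that are visible in the virtual view but occluded in the source view have no data and must be filled by inpainting or by a second source view.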

Contact: Johannes Sauer M.Sc.


The institute's video coding group consists of Dipl.-Ing. Max Bläser, Olena Chubach M.Sc., Johannes Sauer M.Sc., Uday Thakur M.Sc. and Dr.-Ing. Mathias Wien.

Are you interested? There are many possible topics for Bachelor and Master theses in the area of video coding: Bachelorarbeiten / Masterarbeiten.
Moreover, we offer student jobs in the field of video coding, which are described at Studentische Hilfskräfte.