Project Communications Engineering and Multimedia

In this project, 2-4 people work together on one specific topic related to multimedia signal processing. The students shall work independently and present their results at the end of the project.
If you register for the project course, groups and topics will be assigned during the introduction session. We certainly try our best to give everyone a topic they like, but please be aware that we cannot guarantee that you will get one specific topic.


  • Automatic Dartboard Detection
    In this project, automatic dartboard detection from video should be developed. The dartboard should be captured by a camera, e.g. a Microsoft Kinect or a simple webcam, and segmented according to its different scoring areas. Once segmentation is done, the detection of darts should be implemented and automatic point counting enabled. Optionally, the program could be developed as a smartphone application in order to have camera and software combined in one device. The software should be written in C/C++ with OpenCV; rapid prototyping could be done in MATLAB.
    Contact: Thibaut Meyer M.Sc.
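    To give a flavour of the point-counting step, the sketch below maps a detected dart position to its score, assuming the board centre and outer radius have already been obtained from the segmentation step. This is a minimal Python sketch (the project itself targets C/C++ with OpenCV); the function name and calibration inputs are illustrative, while the ring radii follow the official dartboard geometry.

```python
import math

# Standard sector order, clockwise starting at the top ("20" wedge).
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def dart_score(x, y, cx, cy, board_radius):
    """Map a dart position (x, y) in image coordinates to its score, given
    the calibrated board centre (cx, cy) and the outer double-ring radius
    in pixels. Ring sizes are taken from the official board dimensions
    (all in mm, relative to the 170 mm outer radius)."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy) / board_radius    # normalised radius, 1.0 = outer edge
    if r > 1.0:
        return 0                             # missed the board
    if r <= 6.35 / 170:                      # inner bull
        return 50
    if r <= 15.9 / 170:                      # outer bull
        return 25
    # Angle measured clockwise from the top of the board (image y points down).
    # Each wedge spans 18 degrees; the "20" wedge straddles the vertical axis,
    # hence the +9 degree offset before binning.
    angle = math.degrees(math.atan2(dx, -dy)) % 360
    base = SECTORS[int((angle + 9) % 360 // 18)]
    if 99 / 170 <= r <= 107 / 170:           # triple ring
        return 3 * base
    if 162 / 170 <= r <= 1.0:                # double ring
        return 2 * base
    return base
```

    For example, a dart at the exact centre scores 50, while a dart at the top of the outer edge lands in the double-20 field and scores 40.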
  • Motion Compensated Video Coding
    In this project, a basic video codec using motion compensation shall be developed, consisting of an encoder, which compresses a given input video into a bit stream, and a decoder, which is able to perfectly reconstruct the video from the bit stream. The block diagram depicted on the right visualizes the general concept of the hybrid video coding scheme. The task of designing a codec can be broken down into three main areas of work: prediction, decorrelation and quantization, and entropy coding. Instead of "reinventing the wheel", students are encouraged to make use of existing state-of-the-art technologies to solve these challenges, such as optical-flow-based prediction methods for motion compensation, the DCT or KLT for transform coding, and arithmetic coding methods such as CABAC or tANS for entropy coding. The software should be written either in C/C++ or Python by a team of at least 2-3 students. Programming experience in either language is highly recommended. The book High Efficiency Video Coding - Coding Tools and Specification by Mathias Wien provides a very good overview of video coding technology and will be used as a basis for this project. The book is available in the Lehrbuchsammlung.
    Contact: Jens Schneider M.Sc.
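    To illustrate the prediction step, the sketch below implements exhaustive-search block matching, the classical motion estimation technique behind hybrid video codecs, in Python/NumPy. It is a minimal sketch with illustrative names and parameters; a real encoder would add sub-pixel refinement and much faster search strategies.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive-search motion estimation: for every block x block tile of
    the current frame, find the displacement within +/- search pixels that
    minimises the sum of absolute differences (SAD) against the reference
    frame. Returns an array of (dy, dx) motion vectors per block."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block leaves the frame
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(target - cand).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs
```

    The encoder would transmit these vectors plus the (transformed and quantized) prediction residual; the decoder repeats the compensation to reconstruct each frame.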

  • What are CNNs looking at?
    Convolutional Neural Networks (CNNs) are the current state of the art in image classification tasks. CNNs extract feature vectors from images. In this project, students can train a neural network and generate heatmaps showing which areas of an image contribute most to the correct prediction of a class. A basic understanding of neural networks and image processing as well as programming skills in Python are highly recommended for the successful completion of this project.
    Contact: Abin Jose M.Sc.
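    The heatmap idea can be sketched along the lines of Class Activation Mapping (CAM): each feature map of the last convolutional layer is weighted by the final-layer weight of the target class, and the weighted sum shows where the evidence for that class is located. The minimal NumPy sketch below assumes the activations and weights have already been extracted from a trained network (e.g. with a deep learning framework); the function name is illustrative.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Toy Class Activation Mapping: weight each convolutional feature map
    by the corresponding fully-connected weight of the target class, sum
    them into one (H, W) map and normalise it to [0, 1].
    feature_maps: (K, H, W) activations of the last conv layer.
    class_weights: (K,) weights of the target class."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalise for display as a heatmap
    return cam
```

    Upsampled to the input resolution and overlaid on the image, this map is exactly the kind of heatmap the project aims at.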

  • Person detection using Convolutional Neural Networks
    In recent years, Convolutional Neural Networks (CNNs) have outperformed classical methods not only in classifying, but also in localizing objects in images and videos. One way to do this is to retrain the final layers of an existing classification network for the object class one wants to detect and then use the activation maps of one or more layers to determine the location of a bounding box around the object. In this project, students should apply these CNN-based methods to the task of detecting persons in images from the RAISE dataset using TensorFlow.
    Prior knowledge of Python or neural networks would be useful or has to be acquired during the project.
    Contact: Maria Meyer M.Sc.
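    The step from activation map to bounding box can be sketched very simply: threshold the map at a fraction of its peak value and take the extent of the surviving pixels. The NumPy sketch below is a stand-in for this localisation step, with illustrative names and a hypothetical threshold parameter; in the project the map would come from the retrained network.

```python
import numpy as np

def bbox_from_activation(act_map, rel_thresh=0.5):
    """Derive a bounding box from a (H, W) activation map by thresholding
    at rel_thresh times the peak activation and taking the extent of the
    remaining pixels. Returns (x_min, y_min, x_max, y_max)."""
    mask = act_map >= rel_thresh * act_map.max()
    ys, xs = np.where(mask)            # row/column indices of strong activations
    return xs.min(), ys.min(), xs.max(), ys.max()
```

    In practice, the activation map is much coarser than the input image, so the box coordinates still have to be scaled back to input resolution.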
  • Virtual view synthesis using point cloud
    Due to increasing computational power, employing computer graphics tools in real-time digital entertainment is becoming more feasible. Virtual view synthesis is one of the interesting tools used in virtual/augmented reality and free-viewpoint TV. Virtual or novel view synthesis refers to the technique of synthesizing a virtual view of a scene from a set of reference views. These reference views are images already captured by a camera positioned at different viewpoints around the scene. In essence, virtual view synthesis warps textures from the reference views to the novel view, taking depth information into account. In this project, we are going to study and implement a virtual view synthesis method based on a given dense point cloud. That is, given a dense point cloud representing the 3D structure of the scene and a couple of images taken by a camera (the reference views), we would like to define a virtual camera in 3D space and synthesize the corresponding virtual view.
    The software could be written in C/C++ or MATLAB and basic knowledge of image processing or computer vision is recommended.
    Contact: Hossein Bakhshi-Golestani M.Sc.
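    The geometric core of rendering a point cloud into a virtual camera is the pinhole projection x ~ K(RX + t). The NumPy sketch below shows only this projection step under idealised assumptions (no occlusion handling, no texture lookup); the names K, R, t follow the usual camera-model notation and the function name is illustrative.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D world points (N, 3) into a virtual camera with intrinsic
    matrix K (3x3) and pose given by rotation R (3x3) and translation t (3,),
    using the pinhole model x ~ K (R X + t). Returns (N, 2) pixel
    coordinates and the per-point depth, which a renderer would use for
    z-buffering when several points map to the same pixel."""
    cam = points @ R.T + t             # world -> camera coordinates
    depth = cam[:, 2]                  # distance along the optical axis
    proj = cam @ K.T                   # apply intrinsics
    pix = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return pix, depth
```

    Warping textures from the reference views then amounts to looking up, for each projected point, its colour in the images where it is visible.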
  • Point cloud segmentation
    With the growing availability of 3D scanners, large data sets with millions of points can be generated. Usually, it is necessary to reduce the data size, decrease noise and enhance the overall quality of the model. It is therefore often useful to segment the point cloud data into different planes or structures.
    There are many different approaches for point cloud segmentation. The most popular ones are based either on region growing or split-and-merge.
    In this project, we want to study different segmentation algorithms and try to implement at least one algorithm of choice. Furthermore, a suitable way to visualise the segmented data has to be developed and an oral presentation on the project at the end of the semester is mandatory.
    The software should be written in C/C++ by a team of at least 2-3 students. Programming experience is highly recommended. Ready-made functions for reading and visualising point cloud data can be taken from the Point Cloud Library (PCL). Further information on the topic can be found here.
    Contact: Dominik Mehlem M.Sc.
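    As a flavour of the region-growing family of methods, the sketch below grows segments purely by Euclidean proximity: starting from an unvisited seed, it repeatedly adds all points within a radius of the current region and emits one label per connected component. It is a deliberately naive O(N^2) Python/NumPy sketch with illustrative names; real implementations (e.g. in PCL) use a k-d tree for the neighbour search and additionally compare surface normals.

```python
import numpy as np

def region_grow(points, radius=1.0):
    """Naive region growing on an (N, 3) point cloud: flood-fill connected
    components under a Euclidean distance threshold. Returns an (N,)
    array of integer segment labels."""
    n = len(points)
    labels = -np.ones(n, dtype=int)    # -1 marks unvisited points
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue                   # already part of a segment
        labels[seed] = current
        stack = [seed]
        while stack:
            idx = stack.pop()
            # Brute-force neighbour search; a k-d tree would replace this.
            d = np.linalg.norm(points - points[idx], axis=1)
            neighbours = np.where((d <= radius) & (labels == -1))[0]
            labels[neighbours] = current
            stack.extend(neighbours.tolist())
        current += 1
    return labels
```

    Colouring the points by label is already a simple way to visualise the segmentation before a proper viewer is built on top of PCL.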
  • Own ideas on topics are welcome!