Project Communications Engineering and Multimedia

General Information

In this project, 2-4 people work together on one specific topic related to multimedia signal processing. The students are expected to work independently and present their results at the end of the project.
If you register for the project course, groups and topics will be assigned during the introduction session. We certainly try our best to give everyone a topic they like, but please be aware that we cannot guarantee that you will get one specific topic.

Topics

Automated Video Analysis for Virtual Reality Studies

VR studies routinely generate large amounts of video data, which are often evaluated manually. This process is time-consuming and difficult to scale. In addition, participants typically wear head-mounted displays (HMDs), which partially occlude the face and make behavioral analysis more challenging. Recent advances in machine learning enable automated analysis and classification of video data even with these constraints. This project combines experimental VR research, signal processing, and machine learning to develop and evaluate methods for automatic video-based analysis.

In this project, students will develop a prototype for live video analysis in a VR context. They are expected to have some experience using Python and Git. Tasks include research and planning, data preprocessing, model training and evaluation, the implementation of a system that can classify video input, and building a graphical user interface to showcase their prototype. The analysis could, for example, target movement patterns to infer possible emotional states, or estimate a VR user's emotions through face tracking while the user wears an HMD. The specific application can be defined during the project. The prototype should also provide a meaningful visualization of the results and take temporal data into account. Students will work in a team, present their prototype, and give an oral presentation at the end of the project.
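To illustrate the temporal aspect mentioned above, the sketch below shows one possible way to stabilize per-frame predictions: class probabilities from a frame-level classifier are averaged over a sliding window before a label is picked per frame. The classifier itself is only a placeholder here (the real model is what the project would train), and the window-averaging approach is our assumption, not a prescribed method.

```python
import numpy as np

def smooth_predictions(frame_probs: np.ndarray, window: int = 5) -> np.ndarray:
    """Average per-frame class probabilities over a sliding window
    and return one class label per frame.

    frame_probs: array of shape (n_frames, n_classes) holding the
    output of some frame-level classifier (a placeholder here).
    """
    n_frames, _ = frame_probs.shape
    smoothed = np.empty_like(frame_probs)
    for t in range(n_frames):
        # Centered window, clipped at the sequence boundaries
        lo = max(0, t - window // 2)
        hi = min(n_frames, t + window // 2 + 1)
        smoothed[t] = frame_probs[lo:hi].mean(axis=0)
    return smoothed.argmax(axis=1)

# Toy example: 6 frames, 2 classes, one noisy outlier at frame 2.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8],
                  [0.9, 0.1], [0.85, 0.15], [0.9, 0.1]])
labels = smooth_predictions(probs, window=3)
```

With a window of three frames, the single outlier frame is outvoted by its neighbors, so the whole toy sequence maps to class 0. In a live system, a causal (past-only) window would avoid latency from waiting for future frames.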

Contact person: Ulrik Deneken M.Sc.

Evaluation of Consumer-Grade 3D Reconstruction Pipelines for VR Scene Generation

Design, implement, and objectively evaluate room reconstruction pipelines.
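As a hedged sketch of what an objective evaluation could involve: a reconstructed scene can be compared against a reference point cloud with a geometric metric. The Chamfer distance below is only one common choice, and the toy point clouds are made up for illustration; the actual pipelines, data, and metrics are defined within the project.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two point clouds of
    shape (n, 3) and (m, 3): for each point, find the squared
    distance to its nearest neighbor in the other cloud, then
    average both directions."""
    # Pairwise squared distances via broadcasting: shape (n, m)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Toy check: identical clouds have distance zero,
# a shifted copy has a strictly positive distance.
cloud = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
d_same = chamfer_distance(cloud, cloud)
d_shifted = chamfer_distance(cloud, cloud + 0.5)
```

The brute-force pairwise computation is fine for small clouds; realistic reconstructions would call for a k-d tree or a library such as Open3D.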

Contact person: Abhinav Bhattacharya M.Sc.

A Configuration Tool for User-Avatars in Virtual Reality

When interacting with each other in virtual environments, a user’s tracked movements are typically represented through avatars. Research has shown that these first-person representations influence not only the reactions of others in the virtual space, but also participants’ own behaviours and attitudes.
Currently, most user representations used in VR research rely on commercial third-party tools, which often cater to a limited target user group or type of activity in the virtual space. In addition, these tools can change their options or terms of service at any point, rendering study setups unusable and harder to replicate.

In this project, students will develop their own avatar configuration tool in Unity using C#, which allows users to adapt their appearance while immersed in a Virtual Environment via a Meta Quest 3 Head-Mounted Display. Throughout the project, students will develop ideas on the type of options their avatar creator should feature, motivated, e.g., by a specific scenario, user group, or activity. They will then implement these options and appropriate interaction methods in their own configuration tool and gather user feedback for the finished prototype. In the final project presentation, the students should be able to present their usable prototype as well as the rationale behind their configuration options and the feedback they received from their peers.

The focus of the avatar changes will be decided together with the project group.

Contact person: Dr. rer. nat. Pauline Bimberg

Your own topic ideas are welcome as well!