Together with the Software Engineering Research Group, we are responsible for the courses offered in the »Software Engineering« specialization branch of the Applied Informatics bachelor's and master's programs, and we teach and supervise PhD students within the Doctoral Program Informatics.
We also offer several bachelor and master projects for students of Applied Informatics and Information Management. Please see the lists below for more information about our current teaching activities and for details about the bachelor and master projects we currently offer.
If you are interested in starting a PhD thesis on a topic related to human-computer interaction and interactive systems, contact Martin Hitz, Gerhard Leitner, or David Ahlström and let us know a bit about your background and research interests so that we can discuss the possibilities for cooperation!
Introduction to Structured and Object-Based Programming (Part 1), practical course 620.202
Introduction to Structured and Object-Based Programming (Part 1), practical course 620.205
Introduction to Structured and Object-Based Programming (Part 2), practical course 620.222
Introduction to Structured and Object-Based Programming (Part 2), practical course 620.225
Arduino is a popular open-source hardware and software platform that can be used to »digitalize« physical objects. In this project we will use an Arduino microcontroller and stretch sensors to explore ways of creating elastic objects that serve as computer input devices. In the first project phase you will get familiar with the Arduino Software (IDE) and the basic hardware (sensors and controller boards). In the second project phase you will focus on reading data from a stretch sensor and preparing the data for further use on an Android smartphone or smartwatch. This project can also be extended into a Master project.
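As a rough illustration of the second project phase, the sketch below smooths raw stretch-sensor readings and normalizes them to a 0–1 stretch value before they would be forwarded to a smartphone or smartwatch. The 10-bit ADC range and the calibration limits are assumptions for illustration; real values would come from calibrating your particular sensor.

```python
def normalize(raw, lo=120, hi=880):
    """Map a raw 10-bit ADC reading (0..1023) to a clamped 0..1
    stretch value, using assumed calibration limits lo/hi."""
    return min(max((raw - lo) / (hi - lo), 0.0), 1.0)

def moving_average(samples, window=4):
    """Simple moving-average smoothing to reduce sensor noise
    before the data is interpreted as input."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

On the microcontroller itself the same two steps would run in the Arduino sketch; doing them on the phone instead keeps the Arduino code minimal.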
The desired outcome of this project is a software package that provides the functionality necessary to conduct user studies on readability. This includes configuring different test cases (defining combinations of various text attributes such as letter size, color, and typeface), presenting the different test cases to study participants, and logging the time participants need to read the text in each test case.
The outcome of this project is a utility app for macOS and iOS that reads the stored iCal calendar events and provides powerful filtering functionality to let the user easily view and export information about both upcoming events and empty time slots that are available for new meetings.
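At the core of the filtering functionality sits a small interval computation: given the booked events, find the gaps that are free for new meetings. A minimal sketch (event times simplified to hours within one day; the real app would work on iCal date-time values) could look like this:

```python
def free_slots(events, day_start, day_end):
    """Given (start, end) event tuples, return the empty time
    slots between day_start and day_end. Overlapping events are
    merged first so that gaps are computed correctly."""
    merged = []
    for start, end in sorted(events):
        if merged and start <= merged[-1][1]:
            # Overlaps or touches the previous event: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    gaps, cursor = [], day_start
    for start, end in merged:
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        gaps.append((cursor, day_end))
    return gaps
```

The same merge-then-scan idea carries over directly once events are `Date` objects read from the calendar store.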
Technologies & Tools: Java • Python • C++ (your pick!)
In this project you will get familiar with the Leap Motion Controller and its API and then implement a »gesture module« that maps finger and hand tracking information to various interface actions, such as switching between open desktop windows, copy and paste, scrolling, or zooming.
Today’s smartwatches are equipped with high-quality sensors, such as a gyroscope and an accelerometer. These sensors could open up new possibilities to extend the user’s input vocabulary beyond the smartwatch’s small touchscreen. This project aims to interpret the sensor signals generated when the user gestures in the air or swipes a finger (on the hand wearing the watch) across an uneven surface, and to map such signal patterns to input commands.
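To give a feeling for what »mapping signal patterns to commands« could mean at its simplest, the sketch below flags a hypothetical flick gesture when the acceleration magnitude in a sample window exceeds a threshold. The threshold value and the gesture name are made up; a real detector would need filtering and proper pattern matching, which is exactly what the project explores.

```python
import math

def magnitude(sample):
    """Euclidean norm of one (x, y, z) accelerometer sample."""
    return math.sqrt(sum(v * v for v in sample))

def detect_flick(window, threshold=15.0):
    """Naive detector: report a flick gesture when any sample in
    the window exceeds the acceleration threshold (m/s^2, with
    gravity assumed already removed)."""
    return any(magnitude(s) > threshold for s in window)
```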
Line-based input techniques, where the user provides input by »drawing« short lines (instead of hitting small rectangular buttons), show promise as alternative input methods on smartphones and smartwatches in constrained usage situations (e.g., when using one hand, in a shaky environment, or completely eyes-free). The objective of this project is to find out how quickly and accurately users can draw lines of different lengths in different directions on various devices and screen sizes, both with and without visual access to the screen and the input finger.
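Measuring »how accurately« a line was drawn boils down to two simple error measures per stroke, sketched below: how far the drawn length deviates from the requested length, and how far the drawn direction deviates from the requested direction (with wrap-around at 360°). The function names are illustrative, not part of any existing study software.

```python
import math

def stroke_error(start, end, target_len, target_angle_deg):
    """Compare a drawn line (start/end points in px) against the
    requested length (px) and direction (degrees). Returns
    (length error, absolute angular error in degrees)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    drawn_len = math.hypot(dx, dy)
    drawn_angle = math.degrees(math.atan2(dy, dx)) % 360
    ang_err = abs(drawn_angle - target_angle_deg) % 360
    ang_err = min(ang_err, 360 - ang_err)  # wrap-around: 359 vs 1 is 2
    return drawn_len - target_len, ang_err
```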
Technologies & Tools: Java • Python • C++ (your pick!)
In this project you will first get familiar with the Leap Motion Controller and its API and then implement a »gesture module« that maps finger and hand tracking information to various interface actions, such as switching between applications, copy and paste, scrolling, and zooming.
In this project you will develop a software package that provides the functionality necessary to conduct a user study on screen readability. This includes configuring different test cases (by defining combinations of various text attributes such as letter size, color, and typeface), presenting the test cases to study participants, and logging the time participants need to read the text in each test case. You will also gain insight and experience in conducting user experiments, statistical analysis, and eye-tracking technology. Together, we will use your experimental software to conduct a user experiment with our lab’s new eye-tracking system from Ergoneers.
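Configuring the test cases is essentially a full factorial combination of the chosen text attributes. A minimal sketch, with made-up attribute levels that an actual study would decide on with the experimenter:

```python
from itertools import product

# Hypothetical attribute levels for illustration only.
SIZES = [10, 12, 14]            # letter size in pt
COLORS = ["black", "gray"]
TYPEFACES = ["serif", "sans"]

def test_cases():
    """One test case per (size, color, typeface) combination."""
    return [dict(size=s, color=c, typeface=t)
            for s, c, t in product(SIZES, COLORS, TYPEFACES)]

def log_trial(log, participant, case_idx, seconds):
    """Append one reading-time measurement to the study log."""
    log.append((participant, case_idx, seconds))
```

With 3 × 2 × 2 levels this yields 12 test cases per participant, which is the kind of design the logging and presentation functionality has to support.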
In many applications – such as air-traffic control, video surveillance, and computer games – the user needs to quickly and accurately select objects that move across the screen. Several previous research projects have proposed techniques that assist the user in selecting moving screen objects. The aim of this project is to compare such techniques and to build a theoretical model that mathematically describes and predicts how quickly users can select targets moving across the screen, depending on the size of the target and its movement speed. A first version of a Java application that provides the necessary functionality to conduct user experiments on the selection of moving screen objects has already been developed. In this project you will first extend this application with additional functionality and then design and conduct a user experiment that allows you to 1) verify previously reported research results on the effectiveness of various techniques that support the selection of moving screen objects, and 2) empirically build and verify a predictive performance model that explains how fast users can select moving screen objects.
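To make the modeling goal concrete: for stationary targets, Fitts'-law-style models predict selection time from target distance and width, and one plausible extension adds a penalty term for target speed. The sketch below is purely illustrative; the functional form of the speed term and the coefficients are placeholder assumptions that the experiment in this project would replace with a regression fit to real data.

```python
import math

def predicted_time(distance, width, speed, a=0.2, b=0.15, c=0.05):
    """Illustrative prediction of selection time (seconds) for a
    moving target: a Fitts'-law base term plus an assumed penalty
    that grows with target speed relative to target width.
    Coefficients a, b, c are placeholders, not fitted values."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty + c * (speed / width)
```

Fitting `a`, `b`, and `c` (and testing whether this form of the speed term holds at all) against the experiment's logged selection times is exactly point 2) above.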
In Human-Computer Interaction (HCI) research – as in many other research disciplines – new scientific knowledge and technological advances are often based on empirical research, where new ideas and theories are explored through hypothesis testing and controlled experiments. However, critical voices in the HCI research community question the value and use of controlled experiments in HCI. In this project we will contribute to this discussion by redoing – replicating – a series of »famous« user experiments from the HCI literature. We will focus on experiments that have studied the usability of non-standard drop-down menus and how easily and quickly users can navigate menu structures and select the menu items they contain. For this purpose, a first version of a »menu test suite« application has been developed. After further development and adaptation, we can start replicating previous experiments. This includes carefully studying the descriptions of the previous experiments, running the experiments with a group of computer users, and finally analyzing the results and comparing them with the previously reported results.