Evaluating Non-Standard Menu Designs

In Human-Computer Interaction (HCI) research – as well as in many other research disciplines – new scientific knowledge and technological advances are often based on empirical research, where new ideas and theories are explored through hypothesis testing and controlled experiments. However, critical voices within the HCI research community question the value and use of controlled experiments in HCI.

In this project we will contribute to this discussion by redoing – replicating – a series of »famous« user experiments from the HCI literature. We will focus on experiments that have studied the usability of non-standard drop-down menus, and how easily and quickly users can navigate menu structures and select the menu items they contain.

For this purpose, a first version of a »menu test suite« application has been developed. After further development and adaptation, we can start replicating previous menu experiments. This involves carefully studying the descriptions of the previous experiments, then running the experiments with a group of computer users, and finally analyzing our results and comparing them with the previously reported results.
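As a rough sketch of the final analysis step: a replication's mean selection time could, for example, be compared against a previously reported mean with a one-sample t test. All numbers and names below are hypothetical illustrations, not data from any actual experiment; in the project itself the statistical analysis would be done in SPSS.

```python
import statistics

def one_sample_t(sample, reported_mean):
    """t statistic for comparing a replication's mean selection
    time against a previously reported mean (one-sample t test)."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)  # sample standard deviation (n - 1)
    return (mean - reported_mean) / (sd / n ** 0.5)

# Hypothetical selection times (in seconds) from one replication run
times = [1.21, 1.05, 1.30, 0.98, 1.17, 1.25]
t = one_sample_t(times, reported_mean=1.00)  # evaluate with df = n - 1
```

The resulting t value would then be checked against a t distribution with n − 1 degrees of freedom to decide whether the replicated selection times differ reliably from the published ones.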

Accordingly, in this project you will acquire skills and experience in designing, conducting, and evaluating user experiments.

Technologies & Tools: Java, Python, C++, or Objective-C (your choice!), and SPSS (for statistical analysis)


Contact: Dr. David Ahlström