Publications of my research and work.
Abstract: This article presents a first step towards the definition of a visual guide for communicating uncertainty, intended to fit into existing visualisation frameworks and toolkits. The first entry in our guide is a set of visual variables appropriate for representing areal uncertainty in algorithm mechanics. Such visualisations show users how data points are distributed in the classification space and allow them to understand the "goodness-of-fit" of their data to the algorithm. This is important for Visual Analytics applications, which combine Information Visualisation with information mining techniques in an interactive decision-making process. Model uncertainties stemming from widely spread data points need to be visualised so that the user can make adjustments and improve the analysis. To capitalise on established knowledge and meaning, we explore whether popular visual variables for representing areal uncertainty in the domain of geospatial visualisation may also be effective for representing uncertainty in the visualisation of the mechanics of K-means clustering and Linear Regression algorithms, as both use a spatial distribution of data points. In a study with 500 participants we find that overall the visual variable opacity performs best, followed by texture, but that grid and blur may be unsuitable for quantifying uncertainty. The performance of contour lines appears to depend on the algorithm visualisation. Using this study, we extend the validity of a set of domain-specific findings from geospatial visualisation to the visualisation of algorithm mechanics and use these to form the first building blocks of a cross-disciplinary visual guide for representing uncertainty, laying promising foundations for future work.
Download the preprint version of the article
Abstract: Recent efforts in recommender systems research focus increasingly on human factors that affect acceptance of recommendations, such as user satisfaction, trust, transparency, and user control. In this paper, we present a scalable visualisation to interleave the output of several recommender engines with human-generated data, such as user bookmarks and tags. Such a visualisation enables users to explore which recommendations have been bookmarked by like-minded members of the community or marked with a specific relevant tag. Results of a preliminary user study (N=20) indicate that effectiveness and probability of item selection increase when users can explore relations between multiple recommendations and human feedback. In addition, perceived effectiveness and actual effectiveness of the recommendations, as well as user trust in the recommendations, are higher than with a traditional list representation of recommendations.
Download the preprint version of the article
Abstract: Targeted advertising reaches users based on various traits, such as demographics or behaviour. However, users are often reluctant to accept ads. We hypothesise that users are more open to targeted advertising if they can inspect, control and thereby understand the process of ad selection. We conducted a between-subjects study (N=200) to investigate to what extent four key aspects of ads (Quality, Behavioural Intention, Understanding and Attitude) may be affected by transparency and user control using a flow chart. Our results indicate that positive effects of flow charts reported from other domains may also be applicable to advertising: Using flow charts to provide transparency together with user control is found to have more positive effects on domain-specific quality measures than established, text-based approaches and using either of the techniques in isolation. The paper concludes with recommendations for practitioners aiming to improve user response to ads.
Karsten Seipp and Katrien Verbert. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). ACM, New York, NY, USA, 525-540. DOI: http://dx.doi.org/10.1145/2851581.2892573
Abstract: Gestures in HCI often have a meaning in the real world or are specifically designed for an application. They have a definition and purpose. We introduce Null Gestures: Bodily utterances that have no clearly defined purpose or meaning, such as rubbing one's chin while thinking. They exist, but their assignment is "Null". Using the computer, we help users unlock the potential of these gestures by giving them a meaning in the human-computer dialogue. We thus hope to instigate a discussion about their potential use in HCI and the role of the computer as an enabler for the discovery of unused motor abilities.
Abstract: We present a technique that allows distinguishing between index finger and thumb input on touchscreen phones, achieving an average accuracy of 82.6% in a real-life application with only a single touch. We divide the screen into a virtual grid of 9x9mm units and use a dedicated set of training data and algorithms for classifying new touches in each screen location. Further, we present correlations between physical and digital touch properties to extend previous work.
Abstract: One-handed operation of touchscreen smartphones presents challenges such as hard-to-reach targets and the thumb occluding the interface. There are two main approaches to address these challenges: Modification of the graphical user interface (GUI) and extension of the device's input modalities using its sensors. Previous work has presented techniques addressing a specific problem in isolation, but has failed to provide one solution which tackles all main challenges of thumb interaction together. This thesis examines whether this can be done. To establish the background, the thesis finds that users prefer convenience over efficiency and confirms that they predominantly use one hand. To detect mode of operation, the thesis presents an approach to classify a user's finger with a high degree of accuracy using a single touch. Following the first research avenue, the thesis presents a thumb-optimised GUI that increases usability and efficiency of one-handed website operation. Following the second avenue of research, the thesis presents a novel one-handed input technique for smartphones, using a set of three off-screen gestures. Both approaches address the most common problems of one-handed smartphone operation via the thumb largely successfully, but fail to completely solve the problem of interface occlusion. The thesis adds to the literature in the field of visual perception, input classification, GUI optimisation, and input techniques. Readers learn that visual search strategies of the desktop world may also apply to the mobile world and that eye gaze position may have a greater impact on target acquisition time than Fitts's law. The one-touch finger classification technique provides an additional layer of context and new opportunities for improving the human-machine dialogue.
The thumb-optimised GUI presents practitioners with a potential blueprint for translating classical WIMP UI elements into thumb-friendly touch interfaces while the novel input technique provides a new layer of complexity for off-screen interaction.
Karsten Seipp and Kate Devlin. In Proceedings of the 16th international conference on Human-computer interaction with mobile devices & services (MobileHCI '14). ACM, New York, NY, USA, 77-80. Sept 23-26 2014, Toronto, ON, Canada.
Abstract: We present BackPat - a technique for supporting one-handed smartphone operation by using pats of the index finger, middle finger or thumb on the back or side of the device. We devise a novel method using the device's microphone and gyroscope that enables finger-specific gesture detection and explore efficiency and user acceptance of gesture execution for each finger in three user studies with novice BackPat users.
See a video of the technique.
Download the author version of the paper
Abstract: We present BackPat: A technique for supporting one-handed smartphone operation. Using pats of either the index finger or middle finger or thumb on the back or side of the device, the user can extend one-handed use in a variety of difficult tasks. We explain the principle behind the technique and make a first attempt at examining its usability and versatility by implementing it into four applications, covering text selection, reaching distant targets, multiple file selection, and map and image zoom. An initial user study has shown a high grade of acceptance, verified the interaction logic and highlighted improvements in task-completion time over non-enhanced interaction. This way we hope to encourage discussion about its usefulness and potential.
Abstract: Touchscreen smartphones can be operated in portrait (P) and landscape (L) orientation. However, whether a device is faster to operate in P or L, and where to put a button in each layout for best findability and operability, remains unclear. This research makes a first attempt to examine in which orientation a touch-operated interface is faster to use and whether certain "zones" can be identified that have a particularly good performance in either orientation. Our results indicate that such zones exist in both L and P, and that L is faster to use than P. However, the effects are only visible when the user has not been primed with the target name. We conclude our study with practical advice for designers to improve usability and efficiency of time-critical applications and dialogues.
Abstract: Operating a website with one hand on a touchscreen mobile phone remains difficult despite advances in hardware and software development. This problem is exacerbated by manufacturers producing phones with larger screens which are more difficult to hold and operate one-handedly. We present a way to enhance one-handed operation of a website using standard client-side web technologies, without the need to redesign the site or to overwrite any CSS styles. It transforms input for form elements, media control and page access on the fly into a thumb-friendly interaction model. Initial user testing of our interface prototype confirms efficiency and learnability, and highlights its usefulness for navigating long pages and finding the desired information more quickly, even between different websites, when operating the device with one hand.
Part of my M.A. final project. Although created and launched in 2007, it was published in 2010 in the Web Design Index Vol 9, Guenter Beer, Pepin Press, Amsterdam. The Web Design Index provides a yearly round-up of the latest trends in web design, recording the 1000 most innovative websites.