Computers are powerful cognitive tools. Just as a wrench allows you to tighten a bolt in ways that you could not with your bare hands, a computer allows you to create content and think thoughts that would otherwise be impossible. This is not a new phenomenon: every new type of media since the appearance of writing and the alphabet has provided a valuable extension to humans and changed society. Think about the impact of writing and counting.
The introduction of computing is a radical development in this respect; however, our ability as humans to take advantage of the possibilities of computation depends heavily on our ability to communicate with computers and to integrate them into our perceptual and cognitive processes. This is precisely what input and output devices do, and what we must explore further to enhance what we can achieve. In short, I explore ways to increase the bandwidth of communication between humans and computing devices and networks.
I am active in several sub-areas of this large endeavor: gaze- and pose-contingent displays, multi-touch interfaces, pointing and gestural interfaces, and haptic devices.
Gaze- and Pose-contingent Displays
Our physiological and neurological structures determine how we see. The eye is a mechanical perceptual device that needs to be oriented towards specific parts of the surrounding environment and focused at the right depth to perceive information effectively. Together with Michael Mauderer, we explore ways in which computers that know where you are looking can provide better information. For example, simulating accommodation blur allows people to perceive depth on flat monocular screens.
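To give a flavour of how such a display is driven, here is a minimal sketch of gaze-contingent blur rendering; it assumes a per-pixel depth map and a fixation estimate from an eye tracker, and the function and its parameters are illustrative rather than the renderer used in our studies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def accommodation_blur(image, depth, fixation_depth, max_sigma=8.0, n_levels=6):
    """Render approximate accommodation blur on a flat image.

    image: HxWx3 float array. depth: HxW array normalized to 0..1, in
    the same units as fixation_depth (the depth under the user's gaze,
    as reported by an eye tracker). Pixels far in depth from the
    fixation plane are shown progressively more blurred, mimicking
    the eye's limited depth of field.
    """
    # Desired blur strength per pixel grows with depth distance
    # from the fixated plane.
    sigma = np.abs(depth - fixation_depth) * max_sigma

    # Cheap approximation of per-pixel variable blur: precompute a
    # few uniformly blurred copies, then pick the closest level for
    # each pixel.
    levels = np.linspace(0.0, max_sigma, n_levels)
    stack = np.stack([gaussian_filter(image, sigma=(s, s, 0)) for s in levels])
    idx = np.argmin(np.abs(levels[:, None, None] - sigma[None]), axis=0)

    rows, cols = np.indices(idx.shape)
    return stack[idx, rows, cols]
```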
I have also extensively explored how, by taking into account people's position and pose with respect to the digital environment, we can provide more efficient interaction and visual perception. This work has also led me to investigate the importance of displayless space, and how to provide environments where there is almost no displayless space (Ubiquitous Cursor).
Multi-touch Interfaces
In contrast to the point-and-click interfaces of traditional desktop computers, multi-touch interfaces provide a much richer way to communicate with the computer. Specifically, instead of the roughly 2 degrees of freedom available through the mouse (x and y position), multi-touch interfaces can provide 2 degrees of freedom per finger. This allows us to perform multi-finger rotation and pinching to manipulate objects more flexibly. However, this flexibility often comes at a cost: you might want to manipulate an object in only one of the many possible ways (e.g., rotate without scaling). This is the problem of separability of spatial manipulations.
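To make the degrees-of-freedom argument concrete, here is a small sketch of how a two-finger drag can be decomposed into rotation, scale, and translation, and how a manipulation can be constrained for separability; the function and its rotate_only flag are hypothetical, not an API from the work described here.

```python
import cmath

def two_finger_transform(q1, q2, p1, p2, rotate_only=False):
    """Decompose a two-finger drag into rotation, scale, and translation.

    (q1, q2) are the previous positions of the two fingers and
    (p1, p2) their current positions, encoded as complex numbers
    x + y*1j. The inter-finger vector undergoes the same rotation and
    scaling as the manipulated object, so dividing the new vector by
    the old one yields both at once: the quotient's magnitude is the
    scale change and its argument is the rotation angle.
    """
    v = (p2 - p1) / (q2 - q1)            # assumes the fingers never coincide
    scale = abs(v)
    angle = cmath.phase(v)               # radians, counter-clockwise
    translation = (p1 + p2) / 2 - (q1 + q2) / 2   # motion of the centroid

    if rotate_only:
        scale = 1.0                      # separability: suppress scaling
    return angle, scale, translation

# Two fingers turning a quarter turn (the midpoint drifts as well):
angle, scale, t = two_finger_transform(0 + 0j, 2 + 0j, 2 + 0j, 2 + 2j)
print(angle, scale, t)                   # ~1.5708 rad (90 deg), 1.0, (1+1j)
```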
I do believe that multi-touch interfaces allow for interesting new ways of interacting with digital content. For example, I have investigated how multi-touch interaction that resembles the physical world can be leveraged to interact with node-link diagrams. In this area I am also interested in how tangible and multi-touch interfaces differ in terms of embodiment (going back to the interface-as-a-tool idea!).
Pointing and Gestural Interfaces
One of the interaction revolutions of the last decade is the ability to act from a distance on content shown on a display. This is motivated by the problems of touch and physical interaction:
- You might not be able to reach all the areas of the display;
- You might not want to (or have the time to) physically displace yourself to where the content is;
- You might be working in a group, and want your co-workers to understand what you are doing and which objects you are working with.
Remote indirect interaction with a mouse tends to be a better solution in these cases in terms of performance (e.g., by using an appropriate mouse interaction technique), but the mouse is not ideal if you are away from stable surfaces or if you want other people to see what you are doing. Although remote pointing is unstable, it is great for maintaining awareness (people can very easily interpret what you are doing and in which area of the workspace). It is also interesting in that it allows you to interact with whatever you see, but it turns out that interaction can become difficult if the parallax caused by the distance between your pointer and your eyes is large, especially on large displays viewed from a close position.
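As a back-of-the-envelope illustration (my own simplification, not a model from any of the studies mentioned), the disparity between where you look and where your hand points can be estimated from the eye-hand separation and the distance to the display:

```python
import math

def aim_disparity(eye_hand_gap, display_dist):
    """Angle (degrees) between the eye->target and hand->target rays.

    With the eye and the pointing hand separated by eye_hand_gap
    metres, the two rays towards a point on a display display_dist
    metres away differ by roughly atan(gap / dist). The disparity
    shrinks with distance, which is why it bites hardest on a large
    display used from up close.
    """
    return math.degrees(math.atan2(eye_hand_gap, display_dist))

print(aim_disparity(0.4, 0.5))   # ~38.7 degrees half a metre from the wall
print(aim_disparity(0.4, 3.0))   # ~7.6 degrees from three metres away
```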
In-air gestures are also a very interesting (and quickly growing) way to interact with devices. In-air gestures have the following advantages (some of which are shared with touch gestures):
- You don’t have to be next to the display to trigger actions;
- You do not need to take up screen real estate (unlike menus and tools);
- Multiple people can easily use gestures simultaneously to interact with a device;
- There is a virtually infinite set of gestures that can be created to trigger any number of actions.
However, gestures also have their problems:
- Gestures need to be recognised accurately by a gesture classifier (a minimal sketch of one approach follows this list);
- Gestures need to be understandable by humans;
- Gestures need to be remembered by people.
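To make the first of these problems concrete, here is a minimal nearest-neighbour template matcher in the spirit of Wobbrock et al.'s $1 recognizer: resample the stroke, normalize it, and compare it to stored templates. It is a sketch for illustration only, not the classifier used in any study mentioned here, and it omits the rotation-invariance search of the full method.

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) to n evenly spaced points."""
    total = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    step, acc, out = total / (n - 1), 0.0, [points[0]]
    for a, b in zip(points, points[1:]):
        d = math.dist(a, b)
        while acc + d >= step and d > 0:
            t = (step - acc) / d
            a = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            out.append(a)
            d = math.dist(a, b)
            acc = 0.0
        acc += d
    while len(out) < n:                   # guard against float round-off
        out.append(points[-1])
    return out[:n]

def normalize(points):
    """Move the stroke's centroid to the origin and scale it to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def classify(stroke, templates):
    """Return the label of the template closest to the stroke on average."""
    probe = normalize(resample(stroke))
    def avg_dist(tmpl):
        return sum(math.dist(p, q) for p, q in zip(probe, tmpl)) / len(probe)
    return min(templates, key=lambda label: avg_dist(templates[label]))

# Templates are normalized, resampled example strokes, e.g.:
#   templates = {"zigzag": normalize(resample(recorded_points)), ...}
```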
I have studied the memorability of gestures, and found that the most memorable gestures are the ones that you create yourself. I have also studied the kind of gestures that people invent. Additionally, I have helped create touch interfaces that work with physical gestures to interact with graphs.
Haptic Input/Output Devices
Creating interfaces that can provide haptic input and output is very attractive from an experience point of view. From one of our studies we know that touching a screen is not the same thing as grabbing an object. Haptics are also interesting because they very closely combine input and output within the same action-perception loop. However, haptic interfaces are notoriously difficult to build and to program, and usually very expensive.
By designing the Haptic Tabletop Puck, we provide an inexpensive way to create closed-loop haptic perceptions. The device (see its description, and a video – with the initial wooden version of the hardware) is similar to a mouse, but uses absolute positioning and a single-pixel haptic loop. Coupled with a tabletop display, it provides a very powerful way to explore haptic interactions at very little cost. Additionally, we created and tested a toolkit that makes it easy to program.
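The single-pixel haptic loop is simple enough to sketch: on every cycle, look up the height and friction values of the "haptic pixel" under the puck, and drive the rod and brake to match. The class and method names below are placeholders, not the toolkit's actual API.

```python
import time

class HapticPuck:
    """Stand-in for the puck hardware; the real toolkit's API differs.

    Position comes from the tabletop's absolute tracking; the rod
    under the fingertip moves vertically and a brake pad provides
    variable friction against the table.
    """
    def read_position(self):
        return (0, 0)                      # stub: tracker coordinates
    def set_rod_height(self, h):
        print(f"rod height -> {h:.2f}")    # stub: servo command
    def set_friction(self, f):
        print(f"friction   -> {f:.2f}")    # stub: brake command

def haptic_loop(puck, height_map, friction_map, hz=200, ticks=None):
    """Single-pixel haptic rendering: each cycle, look up the map
    values under the puck and drive its actuators to match."""
    t = 0
    while ticks is None or t < ticks:
        x, y = puck.read_position()        # absolute position on the table
        puck.set_rod_height(height_map[y][x])
        puck.set_friction(friction_map[y][x])
        time.sleep(1.0 / hz)
        t += 1

# A one-pixel world that is tall and sticky under the puck:
haptic_loop(HapticPuck(), [[1.0]], [[0.8]], ticks=3)
```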