Date: February 14, 2000
Speaker:
Ken Hinckley — Microsoft Research

Hear about the latest research in computer input devices and possible directions for the future evolution of the mouse. Ken Hinckley, a research scientist at Microsoft, will discuss several exploratory projects that examine possible future extensions to the capabilities of computer mice, other common input devices, and the graphical user interface. His objective is to simplify and enhance user interfaces by extending the vocabulary of physical actions that the user can actively express, as well as those that can be passively sensed by the computer.

For example, the TouchMouse is a prototype mouse that can sense when the user touches or releases the mouse. Touch-sensing devices such as the TouchMouse use unobtrusive capacitance sensors to detect contact from the user’s hand. Touch-sensing is perhaps most interesting when considered as a passive input channel. For example, it can be used to adapt the user interface depending on what the user is doing: toolbars can fade out when the user is not using the mouse, or special-purpose displays can appear when the user touches a dedicated secondary device.
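
As a rough illustration of the idea (not the actual TouchMouse software), the sketch below shows how a passive touch/release signal might drive the kind of UI adaptation described above, fading toolbars out when the hand leaves the mouse. The names Toolbar, TouchAdaptiveUI, and on_touch_changed are assumptions made for this example.

```python
# Hypothetical sketch: a passive touch signal adapting the UI.
# Toolbar and on_touch_changed are illustrative names, not a real API.

class Toolbar:
    """Minimal stand-in for a toolbar whose opacity can be changed."""
    def __init__(self, name):
        self.name = name
        self.opacity = 1.0

    def fade_to(self, target):
        self.opacity = target
        print(f"{self.name}: opacity -> {target}")


class TouchAdaptiveUI:
    """Fades toolbars out on release and back in on touch."""
    def __init__(self, toolbars):
        self.toolbars = toolbars

    def on_touch_changed(self, touching: bool):
        target = 1.0 if touching else 0.0
        for tb in self.toolbars:
            tb.fade_to(target)


if __name__ == "__main__":
    ui = TouchAdaptiveUI([Toolbar("formatting"), Toolbar("drawing")])
    ui.on_touch_changed(True)   # hand contacts the mouse: toolbars appear
    ui.on_touch_changed(False)  # hand releases the mouse: toolbars fade out
```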

The VideoMouse explores possible new uses for a mouse that has a camera as its input sensor. By default, this mouse behaves just like a regular mouse, but it is also capable of sensing motion in 3D, including regular 2D mouse motion, tilt in the forward/back and left/right axes, rotation of the mouse about its vertical axis, and some limited height sensing. Thus, the familiar 2D mouse can be extended for three-dimensional manipulation while remaining suitable for standard 2D cursor positioning tasks. Ken will describe techniques for basic mouse functionality, 3D manipulation, navigating large 2D spaces, and using the camera for lightweight scanning tasks.
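
The following sketch is only an illustration of the degrees of freedom listed above and of the idea that the device defaults to ordinary cursor control; the field names, threshold, and routing logic are assumptions for this example, not Hinckley's implementation.

```python
# Illustrative sketch of the sensing channels attributed to the VideoMouse:
# 2D motion, forward/back and left/right tilt, rotation about the vertical
# axis, and limited height. All names and the routing rule are hypothetical.
from dataclasses import dataclass

@dataclass
class VideoMouseSample:
    dx: float = 0.0        # standard 2D mouse motion
    dy: float = 0.0
    tilt_fb: float = 0.0   # forward/back tilt (degrees)
    tilt_lr: float = 0.0   # left/right tilt (degrees)
    rotation: float = 0.0  # rotation about the vertical axis (degrees)
    height: float = 0.0    # limited height above the desk (mm)

def route_sample(s: VideoMouseSample, threshold: float = 1.0):
    """Fall back to ordinary 2D cursor control unless the extra axes are engaged."""
    extra = (abs(s.tilt_fb), abs(s.tilt_lr), abs(s.rotation), abs(s.height))
    if max(extra) < threshold:
        return ("cursor", s.dx, s.dy)   # behaves like a regular mouse
    return ("manipulate3d", s)          # hand off to a 3D manipulation controller

print(route_sample(VideoMouseSample(dx=3, dy=-2)))
print(route_sample(VideoMouseSample(tilt_fb=12.0, rotation=5.0)))
```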

This research is being pursued in collaboration with Mike Sinclair and others at Microsoft Research and has been published at the ACM CHI’99 and UIST’99 conferences.

Ken Hinckley is a Research Scientist at Microsoft Research. Ken’s research interests include human-computer interaction, input devices & interaction techniques, mobile devices, and experimental studies of human abilities.

Prior to starting at Microsoft Research, Ken earned his PhD working with Randy Pausch in the Department of Computer Science at the University of Virginia, in collaboration with the Neurosurgical Visualization Lab of the Virginia Neurological Institute, and the Proffitt perception lab of the Department of Psychology. Ken’s thesis work explored two-handed interaction with spatial input devices in the “doll’s head interface”, which was used by neurosurgeons for pre-surgical visualization of volumetric medical data.