Graspable interfaces (Fitzmaurice et al., 1995)

From XPUB & Lens-Based wiki
Revision as of 12:39, 2 December 2021 by Martin


Notes

What is a graspable interface

" Graspable User Interfaces, an evolution of the input mechanisms used in graphical user interfaces (GUIs). A Graspable UI design provides users concurrent access to multiple, specialized input devices which can serve as dedicated physical interface widgets, affording physical manipulation and spatial arrangements."

Graspable interface / GUI (graphic user interface)

" With conventional GUIs, there is typically only one graphical input device, such as a mouse. Hence, the physical handle is necessarily "time-multiplexed," being repeatedly attached and unattached to the various logical functions of the GUI. A significant aspect of the Graspable UI is that there can be more than one input device. Hence input control can then be "space-multiplexed." That is, different devices can be attached to different functions, each independently (but possibly simultaneously) accessible. This, then affords the capability to take advantage of the shape, size and position of the physical controller to increase functionality and decrease complexity(...) By using physical objects, we not only allow users to employ a larger expressive range of gestures and grasping behaviors but also to leverage off of a user's innate spatial reasoning skills and everyday knowledge of object manipulations."

"As our computer tasks become more complex, intricate and demanding, we may benefit by having access to specialized physical tools and redefining how such tools interact with the underlying software. This is the topic explored in this thesis. The user interface that results, we call the "Graspable User Interface."

"In the simplest definition, a Graspable User Interface is a physical handle to a virtual function where the physical handle serves as a dedicated functional manipulator. The term Graspable UI refers to both the ability to physically grasp an object (i.e., placing a hand on an object) as well as conceptual grasping (i.e., to take hold of intellectually or to comprehend). At the very least, Graspable UIs can serve as physical embodiments and representations of common graphical user interface elements (such as file icons, windows, menus or push buttons). As well, Graspable UIs have the potential to aid users in manipulating abstract representations of objects or functions on a display."

"Graspable UIs provide users concurrent access to multiple, specialized input devices which can serve as dedicated physical interface widgets, affording physical manipulation and spatial arrangements. Like conventional graphical user interfaces (GUIs), physical devices function as "handles" or manual controllers for logical functions on widgets in the interface. "

Only one action at a time

" Furthermore, the Graspable UI design provides for a concurrence between space-multiplexed input and output. Traditional GUIs have an inherent dissonance in that the display output is often space-multiplexed (icons or control widgets occupy their own space and must be made visible to use) while the input is time-multiplexed (i.e., most of our actions are channeled through a single device, a mouse, over time). Therefore, only one user driven, graphical manipulation task can be performed at a time, as they all use the same transducer. The resulting interaction techniques are often sequential in nature and mutually exclusive. Graspable UIs attempt to overcome this. "

Space-multiplexed or Time-multiplexed

"The primary principle behind Graspable UIs is to adopt a space-multiplexed input design. Input devices can be classified as being space-multiplexed or time-multiplexed."

Space-muliplexed = "With space-multiplexed input, each function to be controlled has a dedicated transducer, each occupying its own space. For example, an automobile has a brake, clutch, throttle, steering wheel, and gear shift which are distinct, dedicated transducers controlling a single specific task."

Time-mutliplexed = "In contrast, time-multiplexing input uses one device to control different functions at different points in time. For instance, the mouse uses time-multiplexing as it controls functions as diverse as menu selection, navigation using the scroll widgets, pointing, and activating "buttons."

Direct Manipulation

"(...) direct manipulation is often a primary goal for many interface designers. Shneiderman describes direct manipulation interfaces as having the following three properties:

  • Continuous representation of the object of interest.
  • Physical actions or labeled button presses instead of complex syntax.
  • Rapid incremental reversible operations whose impact on the object of interest is immediately visible [Shneiderman, 1982, p. 251].
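Shneiderman's third property can be illustrated with a standard command/undo pattern. This sketch is not from the source; the `Document` class and its methods are hypothetical, and the point is only that each operation is small, immediately visible, and reversible.

```python
class Document:
    def __init__(self):
        self.text = ""
        self.undo_stack = []

    def apply(self, do, undo):
        do()
        self.undo_stack.append(undo)
        print(self.text)   # impact is immediately visible after each step

    def insert(self, s):
        old = self.text
        self.apply(lambda: setattr(self, "text", self.text + s),
                   lambda: setattr(self, "text", old))

    def undo(self):
        # Reversibility: every incremental operation can be rolled back.
        if self.undo_stack:
            self.undo_stack.pop()()
            print(self.text)


doc = Document()
doc.insert("hello")   # hello
doc.insert(" world")  # hello world
doc.undo()            # hello
```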

GUIs haven't evolved much

"Conventional graphical user interfaces (GUIs) are based on the concept of direct manipulation. However, we argue that the level of directness and manipulation for GUIs have not evolved or changed much in the last ten years. We still use a keyboard and mouse, with icons and menus and our gestural vocabulary ranges from a small set of actions such as point, click and drag. Has the GUI reached its final evolution?"

Graspable function vs Graspable device

"(...) a graspable function consists of a specialized physical input device which is bound to a virtual function and can serve as a functional manipulator. "

Reduce digital widgets, state information and application data -> Make widgets physical

"he visual channel becomes taxed, the space-multiplex input style may offload some of the visual demands onto the underutilized tactile or motor systems. Many sophisticated software packages make intense use of the visual channel to display user interface widgets, state information, and application data. Even more use of the visual channel is used for software packages that operate on 3D data. Here the idea is to transform some of the virtual UI widgets and functionality onto physical widgets. This process frees up some of the valuable screen space, reducing the need to display static UI widgets and instead display more application data."