Input Methods on User Interfaces

Description

Note on Input Methods on User Interfaces, created by dcardosods on 10/04/2015.

Input Devices

Input devices can be specific or general: the former is designed to perform one particular task, while the latter can perform a varied, not predetermined set of tasks. Devices are also either continuous or discrete, and rely on different sensing methods: mechanical, motion, contact, or signal processing. Text input devices include the widely used keyboards (QWERTY, Dvorak, soft/virtual, chording, and some variants) as well as text recognition and gestural recognition. Another common category is position input devices, e.g., the mouse, track-pad, stylus pen, touchscreen, and joystick. These can sense force or displacement in several dimensions (1, 2, or 3), use position or rate control, and map to absolute or relative positions from direct or indirect contact.
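
As an illustration only (not part of the original note), this design space can be sketched as a small data structure; every name below is hypothetical:

    from dataclasses import dataclass
    from enum import Enum

    class Sensing(Enum):
        MECHANICAL = "mechanical"
        MOTION = "motion"
        CONTACT = "contact"
        SIGNAL_PROCESSING = "signal processing"

    @dataclass
    class InputDevice:
        name: str
        specific: bool       # specific-purpose vs. general-purpose
        continuous: bool     # continuous vs. discrete
        sensing: Sensing
        dimensions: int      # 1, 2 or 3
        rate_control: bool   # rate control vs. position control
        absolute: bool       # absolute vs. relative mapping
        direct: bool         # direct vs. indirect contact

    # A mouse is general-purpose, continuous, motion-sensed, 2D,
    # position-controlled, relative, and indirect:
    mouse = InputDevice("mouse", specific=False, continuous=True,
                        sensing=Sensing.MOTION, dimensions=2,
                        rate_control=False, absolute=False, direct=False)

    # A touchscreen differs on exactly the mapping and contact axes:
    touchscreen = InputDevice("touchscreen", specific=False, continuous=True,
                              sensing=Sensing.CONTACT, dimensions=2,
                              rate_control=False, absolute=True, direct=True)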

Input Performance

"sInput performance can be measured by different mathematical models. Keystroke Level Mode (KLM) is one of them. It applies a constant time for different actions in a sequence describing a such task. K = keystroke, P = pointing, B = button press on mouse, H = hand move form mouse to/from keyboard and M = mental preparation. K value may differ depending on the level of expertise of the user, i.e, how many words per minute (WPM) the user is able to perform. M can be used to distinguish between novice and expert, and is attributed to many things, e.g. looking in the memory for what to do next, before start a task, etc. It has the benefit to be very simple to use, even without a functional system, but has some drawbacks, such as out of date constant time and the non-consideration of learning time and user expertize. Another model is know as Fitt's Law, which applies only to pointing devices. It considers time, distance and target size to calculate the movement time (MT), where the distance is the distance (D, aka A for amplitude) between the starting point and the center of the target and the size (W) is the constraining size of the target, i.e, min (width, height). There is also the Steering Law, which is an adaptation of the Fitt's Law, that considers movement between boundaries (kinda a movement path). Besides these models, is important to know about the differences between Motor Space and Screen Space; for example, changing the speed of a cursor when it gets close to a button changes the motor space (makes the button "sticky"), even though the size of the button doesn't change.

Touch

Touch is a direct type of input that can be implemented with different technologies. Resistive displays consist of two layers separated by a gap; when pressed, they register the position where the layers touch. Capacitive displays consist of an electrical grid with sensors in the corners that detect changes in current to determine the position of a touch, and they enable multi-touch. Less common and/or experimental technologies include optical, inductive, and PixelSense displays.

Bare hands (fingers) and pen-like devices (styluses) are the most common ways of interacting with touch displays. A stylus is very precise, which is good for drawing, for example, but it provides just one contact point and is an intermediate device between the user and the display that requires some thought from the user (it has to be carried, found, and grabbed before use, can be forgotten at home, etc.). Fingers are much more intuitive and are always available. They still have problems, though: the lack of a hover state (important for previewing an action), ambiguous feedback (if an action doesn't work, the user doesn't know whether it's their fault, a problem with the application, or the machine), multi-touch capture (many possible combinations; intentional or not?), and the "fat finger problem" caused by target occlusion and imprecision. Big companies have their say on the perfect target size, but they are not consistent: Apple advises 15 mm, Microsoft 9 mm (7 mm minimum, 2 mm of spacing), and Nokia 10 mm (7 mm minimum, 1 mm of spacing).

Interaction with touch interfaces differs from the familiar WIMP (windows, icons, menus, pointer) style. It is direct manipulation, meaning actions are expressed more as they are in the real world (users say they "interact with the task, not the interface"); for example, dragging a file to the trash or resizing a box by pinching it. This brings challenges as well: analogies can be unclear, it's not always interoperable with keyboard/mouse, and it may not be accessible. It gets even more complicated with gesture-based controls: what would the gesture for copy/paste be? For close? How many gestures is it reasonable for a user to remember? How should multiple gestures be handled? With mobile devices, we need to design for touch, considering the variety of device sizes, orientations, and resolutions. The interface has to be responsive and make the user's job easy.
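
The vendor guidelines above are given in millimetres, while layouts are specified in pixels, so a conversion through the display's pixel density is needed. A minimal sketch, assuming the density in pixels per inch (PPI) is known (the function name is hypothetical):

    MM_PER_INCH = 25.4

    def target_size_px(size_mm, ppi):
        """Convert a physical target size in mm to device pixels
        for a display of the given pixel density."""
        return round(size_mm / MM_PER_INCH * ppi)

    # Microsoft's 9 mm recommendation on a 326 ppi phone display:
    print(target_size_px(9, 326))  # -> 116 physical pixels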

Touchless Interfaces

Touchless interfaces are all about sensing the user's environment. There are 11 "things" that can be sensed: occupancy and motion, range (e.g., distance), position, movement and orientation, gaze, speech, in-air gesture, identity, context, affect, and brain-wave activity. Sensing happens in three phases: pre-processing, feature selection, and classification.

To build on these sensors, we have to work around the balancing act: whether the interaction is explicit or implicit, how to deal with false-positive errors (the system senses something and performs an action the user did not intend), and how to deal with false-negative errors (the user intends to perform an action but the system doesn't recognize it). Users usually need to feel in control and may be intolerant of errors. Strategies to address these challenges include degrading the system gracefully on failure, asking users for confirmation (which may reduce false positives), and providing a fallback/manual way to control the system. Touchless input is a one-state input device that is always on, which can be problematic. Solutions are to always provide feedback during the interaction, to allow other input modalities to be used concurrently, and to use reserved actions that indicate the start and end of a gesture.

We may imagine what it would be like to move from a GUI world to a SUI (Speech User Interface) world. It is easy to picture the challenges: the lack of visual feedback, ambiguous silence, how to handle dialog boxes/prompts, how to interpret "human" language, and how not to overwhelm the user with information (it is harder to absorb words when listening than when reading). We would also have to deal with recognition errors: rejection (which can be decreased with progressive assistance), insertion, and substitution.
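
A minimal sketch of that balancing act, assuming a classifier that reports a confidence score: a high threshold avoids false positives, a confirmation band catches likely false negatives without acting on the user's behalf, and everything below it is ignored in favour of the manual fallback. All names here are hypothetical:

    ACT_THRESHOLD = 0.90   # act without asking above this confidence
    ASK_THRESHOLD = 0.60   # below this, treat the signal as noise

    def handle_sensed_gesture(label, confidence, confirm, perform):
        """Route a sensed gesture based on classifier confidence."""
        if confidence >= ACT_THRESHOLD:
            perform(label)          # confident: act, but still give feedback
        elif confidence >= ASK_THRESHOLD:
            if confirm(label):      # uncertain: ask the user first
                perform(label)
        # else: ignore; the manual fallback control remains available

    # toy usage with hypothetical callbacks
    handle_sensed_gesture(
        "swipe-left", 0.72,
        confirm=lambda g: input(f"Did you mean '{g}'? [y/N] ").lower() == "y",
        perform=lambda g: print(f"performing {g}"),
    )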
