Comments on: Gestures and Interface: Things To Do With Your Finger
https://blog.nearfuturelaboratory.com/2007/08/30/gestures-and-interface-things-to-do-with-your-finger/

By: Julian (Fri, 31 Aug 2007 07:37:52 +0000)
https://blog.nearfuturelaboratory.com/2007/08/30/gestures-and-interface-things-to-do-with-your-finger/#comment-220

…on the other hand, I see lots of the same kind of interface reworked, pivoting on somewhat vague but convenient tropes like “more natural.” Frankly, I think many of these ideas are good ways to think about the possibilities, but they often fail horribly, for a variety of reasons, when they become something that many, many people are given. I have trouble finding a single one of the dozens of devices surrounding me that doesn’t fail on a variety of interface counts.

But here I’m mostly curious about the ways our fingers and hands are used as interface touch points, and why interface gestures are constrained this way. Where does the gesture “unit” cut off? At the button click? Why is that? I’m just wondering — can walking down the street be a gesture unit? And if it is, how does the model for the interface change? How can you experiment with a kind of “mobile” computing that isn’t just typing the same sort of keys on a smaller device with a smaller screen, doing the same old email, only more awkwardly because you’re walking? Should mobile computing include other kinds of tasks that are more “mobile” than the usual fixed, sitting-down, desktop-style tasks? What would mobile computing be if the tasks were things that are only appropriate in mobile contexts (i.e., not desktop-style typing tasks)?
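(An illustrative aside, not part of Julian’s comment: a minimal sketch of what it might mean to treat walking itself as a single gesture unit, using nothing but a window of accelerometer magnitude samples. The threshold, sample rate, and cadence range below are assumed values chosen for illustration, not anything described in the post.)

# Sketch: treat a window of accelerometer magnitude readings (in g) as one
# "walking" gesture unit if the step cadence falls in a plausible human range.
# All numeric values are assumptions for illustration only.

def count_steps(samples, threshold=1.2):
    """Count coarse step peaks: upward crossings of the threshold."""
    steps = 0
    above = False
    for magnitude in samples:
        if magnitude > threshold and not above:
            steps += 1
            above = True
        elif magnitude <= threshold:
            above = False
    return steps

def is_walking(samples, sample_rate_hz=50, min_cadence_hz=1.0, max_cadence_hz=3.0):
    """Return True if the window looks like walking (roughly 1-3 steps per second)."""
    duration_s = len(samples) / sample_rate_hz
    if duration_s == 0:
        return False
    cadence = count_steps(samples) / duration_s
    return min_cadence_hz <= cadence <= max_cadence_hz

if __name__ == "__main__":
    # Fake one second (50 samples) of data with two clear peaks about 0.5 s apart.
    window = [1.0] * 50
    window[12] = window[37] = 1.4
    print("walking gesture detected:", is_walking(window, sample_rate_hz=50))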

By: Jeroen Arendsen (Thu, 30 Aug 2007 08:30:23 +0000)
https://blog.nearfuturelaboratory.com/2007/08/30/gestures-and-interface-things-to-do-with-your-finger/#comment-219

Nice helicopter view. I think we need to keep a clear head where ‘gesture recognition’ is concerned. In a way, turning a knob is as much a gesture as dragging a scrollbar or ‘pinching a picture’. The point is whether the interaction between people and technology is smooth or difficult. A traditional usability approach is still required to create effective and efficient interfaces using gestures. Even if gesture supporters claim a ‘more natural’ interaction style, this is not self-evident. Speech recognition was claimed to be ‘more natural’ too, but the current state of the art does not allow for very ‘natural’ interaction. The same goes for video-based or glove-based gesture recognition. It is still worth developing such technologies further, for various reasons; as HCI designers, however, you need to stay critical and sober about these things. The Wii is a good example of well-designed technology, where game designers have been able to fine-tune the motions used for commands in an iterative way (prototype, test, and repeat). That is expensive but necessary to get the right ‘feeling’. With a dial or a knob, that sort of optimisation was done long ago (though some suppliers will still fuck up the force feedback on a simple button).
