Goodbye Touch! Hello Post-Touch!

Windows 7 Kinect Gestures

You’ve probably heard me rant about the error of putting a touchscreen on every new product. Touch is slick and easy, but it’s not for everything, and it’s already going out of style. The new kid in town is the natural user interface – “Post-Touch,” as a number of industry leaders are calling it. Post-Touch is basically an interface that moves past the requirement of touch, detecting gestures, usually through some kind of near-field depth camera.

Post-Touch Tech Available Today

Idling – by Pragmagraphr

The Kinect is one such technology for Post-Touch interfaces that I’ve written about before. In my current project at the DCOG-HCI Lab at UCSD, we are implementing an augmented workspace that uses the Kinect to detect how people interact with it. By tracking users and their hand movements, the workspace can respond intelligently.

For example, have you ever pulled something up on the screen while taking notes, only to have the computer assume you are idle and turn off the screen? An augmented workspace could detect your note-taking posture and gaze and keep the screen on. No hand-waving necessary. Of course, if you are actually idling like the fellow at right, it should indeed turn off the screen.
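One crude way to tell “active” from “idle” with a depth camera is to compare consecutive depth frames and see whether enough of the scene has moved. Here is a minimal sketch of that idea, with simulated frames standing in for camera input; the thresholds and frame sizes are hypothetical, not from any real Kinect pipeline:

```python
import numpy as np

def is_user_active(prev_depth, curr_depth, motion_mm=30, active_fraction=0.01):
    """Compare two depth frames (values in mm); report activity if enough
    pixels changed depth by more than motion_mm between frames."""
    moved = np.abs(curr_depth.astype(int) - prev_depth.astype(int)) > motion_mm
    return moved.mean() > active_fraction

# Simulated 480x640 depth frames: a still scene vs. one with a moving hand.
still = np.full((480, 640), 1500)   # everything about 1.5 m away
moving = still.copy()
moving[200:280, 300:380] = 600      # a hand region now about 0.6 m away

print(is_user_active(still, still))   # False: nothing moved
print(is_user_active(still, moving))  # True: the hand region changed depth
```

A real system would do much more (posture and gaze estimation, smoothing over time), but the core signal is this kind of frame-to-frame change.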

A few months ago, a new company called Leap Motion caused a stir when they demoed their high resolution gesture-tracking module. Similar to the Kinect in features although allegedly not in technology, it offers much greater levels of sensitivity and control. Check out their video below to see the possibilities of the Leap Motion. The company appears to be building steam and I’m excited to see their first product release!

How will Post-Touch change things?

And here, I defer to the experts. You should read this great article on what the future holds for Post-Touch, but I’ll provide some highlights here.

Post Touch is smaller gestures – Michael Buckwald, CEO of Leap Motion

As screens get larger, the gestures get more tiring. Try pretending to move your mouse around your desktop screen. Now try your flat-screen TV. Unless you want to develop one gorilla arm muscle, that’s going to get real tiring real fast. Post-Touch will scale down those gestures so they’re not dependent on screen size.
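To make the scaling point concrete, here is a toy sketch of screen-size-independent pointing: a fixed, comfortable range of hand travel (say 20 cm) is mapped onto the full screen width, whatever that width is. The numbers and function are purely illustrative, not from any real SDK:

```python
def hand_to_cursor_x(hand_x_mm, screen_width_px, hand_range_mm=200):
    """Map a hand position within a small fixed range (in mm, centered on 0)
    to a cursor x position, regardless of how large the screen is."""
    # Clamp to the comfortable range, then scale to pixels.
    clamped = max(-hand_range_mm / 2, min(hand_range_mm / 2, hand_x_mm))
    fraction = (clamped + hand_range_mm / 2) / hand_range_mm
    return round(fraction * (screen_width_px - 1))

# The same 20 cm of hand travel spans a laptop and a wall display alike.
print(hand_to_cursor_x(-100, 1920))  # 0 (left edge of a laptop screen)
print(hand_to_cursor_x(100, 7680))   # 7679 (right edge of a huge display)
```

Because the gesture range is fixed in the user’s own space rather than tied to screen size, a wall-sized display demands no more arm movement than a phone.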

Post-Touch Cameras Will Come With Every Laptop – Doug Carmean, Researcher-at-Large for Intel Labs

Wow! This was news to me – Carmean says that as early as next year, Logitech near-field depth cameras are going to show up in laptops. This will be a huge boost to the technology. Everyone who buys a laptop will be looking for software that takes advantage of it.

And there’s more, so really, read the article! And tell me what you think below.

You Are The Natural User Interface

Today my boyfriend’s iPhone screen cracked. Not spontaneously – he dropped it – but the cracked screen reminded me of one depressing fact: despite research into Natural User Interfaces and embodied cognition, all these smartphones and tablets are just pictures under glass. Our interactions with them are funneled mostly through one or two fingers. In fact, I’d argue this is a step back from using a mouse and keyboard. Just try coding with a touchscreen keyboard! I dare you. If you haven’t seen Bret Victor’s illuminated rant about Pictures Under Glass, you can read it here.

Olympic Grace for the Rest of Us

With the inspiring Olympic displays of the power and grace of the human body all around us, it’s dreadful that we confine the human body that is capable of this:


to interactions like this:

The One Finger Interface (Image by flickingerbrad)

No Olympic grace there, and sadder still, that poor kid is probably looking at many years of pointing and sliding to come.

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
– Bret Victor

From Bret Victor‘s rant, “The next time you make breakfast, pay attention to the exquisitely intricate choreography of opening cupboards and pouring the milk — notice how your limbs move in space, how effortlessly you use your weight and balance. The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well. With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?”

So what are some interfaces that truly allow us to interact naturally with our environment and still benefit from technology?

Brain Imaging Made Easy

Acrylic plane interface to brain imaging software

In this 1994 paper, Ken Hinckley, Randy Pausch and their colleagues detail a system using an acrylic plane and a doll’s head to help neurosurgeons interact with brain imaging software. The 3D planes that the neurosurgeons need to view are difficult to navigate with a mouse or keyboard, but very intuitive with an acrylic “plane” and a “model” of the head.

From the paper, “All of the approximately 15 neurosurgeons who have tried the interface were able to “get the hang of it” within about one minute of touching the props; many users required considerably less time than this.” Of course, they are neurosurgeons, but I’m guessing it’s very unlikely that they would get the hang of most keyboard and mouse interfaces to this system in about a minute.
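The core computation behind a prop interface like this is simple: the tracked pose of the acrylic plate defines a plane through the imaging volume, and the software resamples the volume along that plane. The paper doesn’t publish its implementation, so the sketch below is my own toy illustration of the idea, using nearest-neighbor sampling and a synthetic “head” volume:

```python
import numpy as np

def oblique_slice(volume, origin, u, v, size=64):
    """Nearest-neighbor resample of a 3D volume along the plane spanned by
    unit vectors u and v, centered at origin (all in voxel coordinates)."""
    out = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = origin + (i - size // 2) * u + (j - size // 2) * v
            idx = np.round(p).astype(int)
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                out[i, j] = volume[tuple(idx)]
    return out

# A toy "head": a 64^3 volume containing a bright sphere in the middle.
z, y, x = np.indices((64, 64, 64))
head = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(float)

# Plate held flat gives an axial slice; tilt u and v and the slice tilts too.
axial = oblique_slice(head, origin=np.array([32.0, 32.0, 32.0]),
                      u=np.array([0.0, 1.0, 0.0]), v=np.array([0.0, 0.0, 1.0]))
print(axial[32, 32])  # 1.0: the slice passes through the sphere's center
```

The magic of the prop interface isn’t in this math – it’s that tilting a physical plate maps directly onto choosing u, v, and origin, with no mouse-driven 3D widgets in between.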

This interface was designed in the 90’s, and we’re still stuck on touchscreens!

Blast Motion’s Sensors

Blast sensors analyze your swing

Blast Motion Inc. creates puck-shaped wireless devices that can be embedded into golf clubs and similar sporting equipment. The pucks can collect data about the user’s swing, speed or motion in general. The data is useful feedback to help the user assess and improve their swing.

I like that the interface here is a real golf club, likely the user’s own club, rather than some electronic measuring device. The puck seems small and light enough not to interfere with the swing, and will soon be embedded into the golf club rather than sold as a separate attachment. I’m interested to see how their product fares when it comes out.

But I Can’t Control my Computer with A Golf Club

Yes, yes, neither of these interfaces can be extrapolated in a general way, but maybe that’s a limitation we should be moving away from. Why are we shoehorning a touchscreen interface onto everything? Perhaps we need to look at the task at hand and design the best interface for it, not the best touchscreen UI. The ReacTable is a great example of a completely new interface designed for the specific task of creating digital music. (Of course, the app is now available for iOS and Android – back to the touchscreen!) Similarly, the Wii and Kinect have made strides in allowing natural input, but are only recently being considered for serious applications. I really hope that natural interfaces start becoming the norm rather than the exception.

Have you struggled with Pictures Under Glass interfaces for your tasks?
Have you encountered any NUIs (Natural User Interfaces) that you enjoyed (or didn’t)?
Let me know in the comments below.

Hacking the Microsoft Kinect for the Real World

Kinect Technologies by Microsoft

Everyone’s heard of the Microsoft Kinect, the gaming technology that caused a huge stir when it launched in 2010. In fact, in 2011 the Kinect broke the Guinness World Record as the “fastest selling consumer electronics device,” selling 8 million units in its first 60 days.

The Kinect features color and depth cameras that can detect the user’s body and limb positions, without the need for a physical controller. The Kinect also contains an array of microphones which enable voice control in addition to gesture control. In other words, the Kinect can “see” and “hear” you, without your ever having to touch an interface; it’s a Natural User Interface or NUI.
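Under the hood, a depth camera turns each pixel plus its depth reading into a 3D point using standard pinhole back-projection – that’s how the Kinect can locate your limbs in space rather than just on a flat image. The sketch below shows the math; the intrinsic parameters are in the ballpark commonly reported for the original Kinect depth camera, but treat them as illustrative assumptions:

```python
def depth_pixel_to_3d(u, v, depth_mm, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    """Back-project a depth pixel (u, v) with depth in mm to camera-space
    coordinates (x, y, z) in meters, using a pinhole camera model."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A pixel at the image center, 2 m away, sits on the optical axis.
print(depth_pixel_to_3d(319.5, 239.5, 2000))  # (0.0, 0.0, 2.0)
```

Skeleton tracking then works on top of clouds of such points, fitting joint positions to the 3D shape of the user.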

Naturally, this means sports and dance games are very popular in the Kinect line-up, but there’s a whole other line-up that fewer people know about – innovative real-world applications that take advantage of the Kinect’s hardware. The underlying technology isn’t new, but it has never been this cheap, robust and accessible.

Hacker Bounty!

After Adafruit, an open-source electronics advocate, offered a bounty for them, open-source drivers for the Kinect showed up mere days after the product’s release. Since then, enterprising hackers and startups have been coming up with their own non-gaming applications for the Kinect.

Tedesys makes viewing medical data easier during long surgeries

Tedesys, based in Cantabria, Spain, is developing an interface that helps surgeons during long procedures. Normally, to access necessary medical information during a procedure, the surgeon must leave the sterile environment to use a computer and then scrub back into the OR. Tedesys’ interface lets doctors navigate the medical information they need using gesture and voice control, without contaminating the sterile environment.

At the Royal Berkshire Hospital in the UK, doctors are using the Kinect to make rehabilitation therapy for stroke victims less frustrating. There are no complex controls to worry about, and simple games make the rehabilitation exercises enjoyable. The system improves patients’ strength, control and mobility, and tracks their improvement over time.

At the Lakeside Center for Autism in Issaquah, Wash., staff are using the Kinect to help children with Autism work on skill-building, social interactions and motor planning. You can read more about these uses at Kinect Effect.

Microsoft’s Reaction

Until recently, it appeared Microsoft had decided mostly to look the other way as people used their hardware with unofficial hacked drivers. But earlier this year, in a surprising about-face, Microsoft decided to jump back into the game by releasing a new version of their technology, called Kinect for Windows, designed specifically for PC applications, with a free SDK and drivers. They also decided to motivate and assist startups using their technology by providing 10 promising finalists with $20,000 to innovate with the Kinect.

Some new startups that were accelerated by this program are:

  • übi interactive can turn any surface into a touchscreen.
  • Ikkos Training aims to use theories of neuroplasticity to train athletes and improve their performance.
  • GestSure allows surgeons to navigate medical data in the OR, as described earlier.
  • Styku creates a virtual “smart” fitting room for retailers.

Frankly, I find these latest uses less exciting than I had hoped, but perhaps now that Microsoft has made the Kinect a commercially viable interface, startups will be encouraged to bring their ideas to life. I really do believe this technology is game-changing: instead of all our interactions with technology being funneled through physical interfaces like keyboards and mice, we can use our full range of motion in intuitive and ergonomic ways. Once the idea of NUIs really starts to permeate the social consciousness, we will see many innovative uses of this technology.