5 Reasons Embedded Systems Designers Should Embrace HCI

As an embedded software engineer who is interested in bringing HCI design practice to embedded devices, I often hear people say, “Oh, but we don’t have a display on our device, so I’m not sure we need that.” Somehow, the embedded systems community seems to have decided that well-designed interactions only apply to software with a GUI. I’d like to argue that’s not true. In fact, embedded devices not only need good interaction design, they need it more than traditional (computer-based) software does.

1. Embedded devices can’t fall back on default interactions

Wacom Cintiq

Traditional software solutions can always fall back on screen/keyboard/mouse interaction. For example, when developing image editing software, you can always start with a mouse and keyboard interface, even though a tablet interface may be better.

But embedded devices are almost defined by their lack of mouse and keyboard. The embedded designer has no default interaction; they need to make the difficult choices up front about what form of input they can support and how they can relay feedback or information.

The embedded designer has no default interaction, they need to make the difficult choices up front.

A very simple example of this is designing a device that uses WiFi for connectivity. Entering a WEP key is challenging under any circumstances; imagine how much more so without a real keyboard. Is it even possible without a display?
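
It is, but only if the designer plans for it. Here’s a minimal sketch, in C, of one common workaround: a headless device with a single button and LED that falls into a provisioning state where the credentials arrive out-of-band, say from a phone app. Everything here (the state names, the LED codes, the long-press reset) is my own illustrative assumption, not any particular product’s design:

```c
#include <stdbool.h>

/* Provisioning flow for a headless WiFi device: one button in,
 * one LED out. Credentials arrive out-of-band (e.g. from a phone
 * app over a temporary access point), since there is no keyboard
 * to type a key on and no display to confirm it.                  */
typedef enum {
    STATE_UNPROVISIONED,  /* LED blinking: waiting for credentials */
    STATE_CONNECTING,     /* LED solid:    trying the network      */
    STATE_ONLINE,         /* LED off:      connected, stay quiet   */
} wifi_state;

wifi_state next_state(wifi_state s, bool button_held, bool got_creds,
                      bool join_ok)
{
    if (button_held)               /* long-press: forget and re-pair */
        return STATE_UNPROVISIONED;

    switch (s) {
    case STATE_UNPROVISIONED:
        return got_creds ? STATE_CONNECTING : STATE_UNPROVISIONED;
    case STATE_CONNECTING:
        return join_ok ? STATE_ONLINE : STATE_UNPROVISIONED;
    case STATE_ONLINE:
    default:
        return s;
    }
}
```

Notice that the LED blink patterns are doing the display’s job here: with no screen, even a three-state machine forces you to decide how each state is communicated back to the user.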

2. Embedded devices go everywhere

Fitbit One

This means they are exposed to the elements, challenging lighting conditions, and portability constraints in ways that regular software is not. For example, wearable fitness devices often get ruined by sweat, which can be quite corrosive to metals. The latest offerings, such as the Fitbit One, are now sweatproof and rainproof, but you can imagine that in the early devices, the software developers had to contend with false readings due to moisture on the sensor.
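
One classic software defense against that kind of noise is filtering the raw sensor stream before acting on it. A minimal sketch of a median-of-three filter, which drops isolated spikes like a single moisture-induced reading (illustrative only; I don’t know what any fitness tracker’s firmware actually does):

```c
/* Median-of-three filter: an isolated spurious sample, such as a
 * moisture-induced spike, is outvoted by its two neighbors.
 * Usage: int clean = median3(prev2, prev1, raw);                  */
static int median3(int a, int b, int c)
{
    if ((a >= b && a <= c) || (a >= c && a <= b)) return a;
    if ((b >= a && b <= c) || (b >= c && b <= a)) return b;
    return c;
}
```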

3. Embedded devices are not the center of attention

Of course, they can be. Certainly, the latest phones and tablets garner a lot of attention. But the vast majority of embedded devices have to operate in a context where the user has other tasks and goals taking their attention. Attention is an increasingly precious and scarce resource as more and more of our devices compete for it. A well-designed device should work well in the background, and capture your attention judiciously, if at all.

A well-designed device should work well in the background, and capture your attention judiciously, if at all.

Think about car entertainment systems: they’re best when easily operated, preferably without even having to glance at them, because the user’s main goal is driving, not playing music. In fact, to avoid making users take their hands off the wheel, most newer vehicles embed controls in the steering wheel itself.

4. Embedded devices want you to twist them, pull them, bop them

Remember that old game, Bop It? Twisting, pulling, and yes, even bopping are affordances. If you haven’t heard the term, Wikipedia says, “An affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.”

Keyboards and mice have some affordances, but they’ve been well explored. Embedded devices have all sorts of great new affordances, because they’re not tied to a particular form. They can take advantage of the many ways that we interact with the natural world around us.

Embedded devices enable a whole new universe of interactions that are unexplored solely because the keyboard and mouse didn’t afford them.

A company called Blast Motion creates pucks that can be embedded into golf clubs and tennis rackets to analyze your swing. The interface is simply swinging your golf club. Pinch-to-zoom and multi-touch didn’t really become popular until the iPhone made your hand the pointing device. Embedded devices enable a whole new universe of interactions that are unexplored solely because the keyboard and mouse didn’t afford them.

5. Customers care

Finally, embedded designers should care because their customers do. Back in the mid-2000s, everyone thought the cellphone market in the US was saturated, and the only way to sell phones was to make ultra-low-cost versions for China and India. Then Apple came out with the first iPhone in 2007, and all of a sudden, everyone was willing to pay $499 for a cellphone.

Actions that were frustrating before seemed effortless, intuitive… fun, even.

Why? There was nothing special about the hardware and software, technologically speaking. What was special was the interaction experience it gave users. Actions that were frustrating before seemed effortless, intuitive… fun, even. Do you know how many grandparents are happy to use an iPhone? Grandparents! The very same ones that you spend hours setting up Blu-ray players and digital frames for every Christmas.

So, what now?

Well, that’s the longest rant yet. But I absolutely believe that embedded devices are the next frontier for computing. Low-power networking, sensing technologies, and fast processors are converging right now, making a lot of amazing products possible. But these products won’t go far unless we take the next step.

The only way to move the product from the hands of a few early adopters to the masses is to learn about interaction design, to think about users, their context and goals, and iterate the design until the product is an absolute delight to use.

To start, Don Norman’s excellent book The Design of Everyday Things will get you to look at everything around you as a designed interface.
The Interaction Design Encyclopedia is a great resource explaining the terms and concepts.
Scott Klemmer, Stanford professor and HCI star, has a free HCI course on Coursera.
Are you going to start thinking about how to design interactions with your product? Post your thoughts below!

Lean Startup is User Design for Your Business Idea

I had a sudden epiphany yesterday that I’d like to share! Lately, I’ve been reading up on the Lean Startup movement. Lean Startup, pioneered by Eric Ries, is a methodology that involves prototyping and testing one’s hypotheses early on, to avoid the high cost of failure.

The premise is simple: you test your intuitions about your idea, your customers, and your business plan before you even start work on implementing the solution. This way you can find the flaws, or even discover that the original idea is a no-go, early on rather than after you’ve spent your life savings developing it.

Parallels with HCI

So last night, I was watching a bunch of Lean Startup videos online from Ignite, and right after watching these videos, I switched tracks and started watching some lectures on HCI. Scott Klemmer, HCI star and soon-to-be UCSD professor, has started a free online course on Coursera. Immediately, I realized the parallels between the two methodologies. HCI and UX recommend prototyping to test ideas, tight design iterations to learn from and change your interface, and developing rich customer personas.

It occurred to me then that Lean Startup is essentially applying HCI and UX concepts to startups. The product you are designing is your business, including all its components – the solution, who you think your customer is, how you plan to target them, and how you intend to make money. You test all the assumptions you make about who your customer is, what they want, and whether your solution provides it.

Just as the developer is not a good judge of an intuitive interface, the entrepreneur is often not the best judge of whether customers will flock to their solution over others. Basically, your customer is your user and the business model you present them with is their interface to your solution.

Other Thoughts

Today, I wake up and find this article from Smashing Magazine posted by my friend, Matt Hong. Clearly, I’m not the only one to see the parallels. The author, Tomer Sharon, goes one step further to say that Lean Startup is just great packaging for UX principles. Having just been introduced to the Lean Startup movement, I can’t verify that, but I can definitely see the similarities. It’s also what draws me to the Lean Startup methodology.

Do you see similarities between the ideas? If you haven’t heard much about one or both, head over and read the article at Smashing Magazine.
What are some other HCI concepts that haven’t been applied to the design of a business?

Thumbnail and front page images courtesy of Flickr users betseyweber and krawcowicz respectively.

Goodbye Touch! Hello Post-Touch!

Windows 7 Kinect Gestures

You’ve probably heard me rant about the error of putting a touchscreen on every new product. Touch is slick and easy, but it’s not for everything. And it’s already going out of style. The new kid in town is the natural user interface – “Post-Touch”, as a number of industry leaders are calling it. Post-Touch is basically an interface that goes beyond the requirement of touch and can detect gestures, usually through some kind of near-field depth camera.

Post-Touch Tech Available Today

Idling – by Pragmagraphr

The Kinect, which I’ve written about before, is one such technology for Post-Touch interfaces. In my current project at the DCOG-HCI Lab at UCSD, we are implementing an augmented workspace that uses the Kinect to detect interactions within it. By tracking users and their hand movements, the workspace can respond in an intelligent manner.

For example, have you ever pulled something up on the screen while taking notes, only to have the computer assume you are idle and turn off the screen? An augmented workspace could detect your note-taking posture and gaze and keep the screen on. No hand-waving necessary. Of course, if you are actually idling like this fellow at right, it should indeed turn off the screen.
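
Here’s a minimal sketch, in C, of the kind of per-frame decision logic such a workspace might run. The user_state struct, its fields, and the thresholds are all my own illustrative assumptions, not the Kinect SDK’s actual API:

```c
#include <stdbool.h>

/* Hypothetical per-frame tracking summary; real skeleton and gaze
 * data from a depth camera would be reduced to something like this. */
typedef struct {
    bool   user_present;        /* a body was detected in frame      */
    bool   gaze_on_screen;      /* head/gaze oriented at the display */
    double hand_speed_mm_s;     /* dominant hand speed               */
    double seconds_since_input; /* time since last key/mouse event   */
} user_state;

/* Keep the screen awake if the user looks engaged, even when the
 * keyboard and mouse have been idle (e.g. taking notes on paper).  */
bool keep_screen_on(const user_state *s)
{
    if (!s->user_present)
        return false;                  /* nobody there: let it sleep */
    if (s->gaze_on_screen)
        return true;                   /* reading: stay on           */
    if (s->hand_speed_mm_s > 20.0)
        return true;                   /* writing: stay on           */
    return s->seconds_since_input < 300.0; /* fall back to a timeout */
}
```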

A few months ago, a new company called Leap Motion caused a stir when they demoed their high-resolution gesture-tracking module. Similar to the Kinect in features, although allegedly not in technology, it offers much greater sensitivity and control. Check out their video below to see the possibilities of the Leap Motion. The company appears to be picking up steam, and I’m excited to see their first product release!

How will Post-Touch change things?

And here, I defer to the experts. You should read this great article on what the future holds for Post-Touch, but I’ll provide some highlights here.

Post-Touch is smaller gestures – Michael Buckwald, CEO of Leap Motion

As screens get larger, the gestures get more tiring. Try pretending to move your mouse around your desktop screen. Now try your flat-screen TV. Unless you want to develop one gorilla arm muscle, that’s going to get real tiring real fast. Post-Touch will scale down those gestures so they’re not dependent on screen size.
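
One plausible way to achieve that decoupling is to map hand motion to the pointer relatively, with a fixed gain, the way a mouse does, rather than mapping the tracking volume onto the screen absolutely. A minimal sketch under that assumption (the names and the gain value are mine, not any vendor’s API):

```c
/* Relative mapping: a small, comfortable hand movement produces the
 * same on-screen travel whether the display is 13" or 60".         */
typedef struct { double x, y; } vec2;

vec2 cursor_delta(vec2 hand_now_mm, vec2 hand_prev_mm,
                  double gain_px_per_mm)
{
    vec2 d = { (hand_now_mm.x - hand_prev_mm.x) * gain_px_per_mm,
               (hand_now_mm.y - hand_prev_mm.y) * gain_px_per_mm };
    return d; /* add to the current cursor position, then clamp */
}
```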

Post-Touch Cameras Will Come With Every Laptop – Doug Carmean, Researcher-at-Large for Intel Labs

Wow! This was news to me – Carmean says that as early as next year, Logitech near-field depth cameras are going to show up in laptops. This will be a huge boost for the technology. Everyone who buys a laptop is going to be looking for software that takes advantage of it.

And there’s more, so really, read the article! And tell me what you think below.

You Are The Natural User Interface

Today my boyfriend’s iPhone screen cracked – not spontaneously, he dropped it – but the cracked screen reminded me of one depressing fact: despite research into Natural User Interfaces and embodied cognition, all these smartphones and tablets are just pictures under glass. Our interactions with them are funneled mostly through one or two fingers. In fact, I’d argue this is a step back from using a mouse and keyboard. Just try coding with a touchscreen keyboard! I dare you. If you haven’t seen Bret Victor’s illuminating rant about Pictures Under Glass, you can read it here.

Olympic Grace for the Rest of Us

With the inspiring Olympic displays of the power and grace of the human body all around us, it’s dreadful that we confine a body capable of this:

Dancer

to interactions like this:

The One Finger Interface (Image by flickingerbrad)

No Olympic grace there, and the sadder thing is that poor kid is probably looking at many years of pointing and sliding to come.

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
– Bret Victor

From Bret Victor‘s rant, “The next time you make breakfast, pay attention to the exquisitely intricate choreography of opening cupboards and pouring the milk — notice how your limbs move in space, how effortlessly you use your weight and balance. The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well. With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?”

So what are some interfaces that truly allow us to interact naturally with our environment and still benefit from technology?

Brain Imaging Made Easy

Acrylic plane interface to brain imaging software

In this 1994 paper, Ken Hinckley, Randy Pausch, and their colleagues detail a system using an acrylic plane and a doll’s head to help neurosurgeons interact with brain imaging software. The 3D cutting planes that the neurosurgeons need to view are difficult to specify with a mouse or keyboard, but very intuitive with an acrylic “plane” held against a “model” of the head.

From the paper, “All of the approximately 15 neurosurgeons who have tried the interface were able to “get the hang of it” within about one minute of touching the props; many users required considerably less time than this.” Of course, they are neurosurgeons, but I’m guessing it’s very unlikely that they would get the hang of most keyboard and mouse interfaces to this system in about a minute.
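
Part of why it’s so learnable, I suspect, is that the mapping is nearly direct: the tracked pose of the acrylic sheet, expressed relative to the head prop, is the cutting plane through the brain volume. A rough sketch of that idea (the names are mine for illustration; the paper’s actual implementation details differ):

```c
#include <math.h>

typedef struct { double x, y, z; } vec3;

/* The slice the imaging software should render: a plane through
 * `point` with unit normal `normal`.                               */
typedef struct { vec3 point; vec3 normal; } plane;

/* The tracker reports each prop's pose; the acrylic sheet's pose
 * relative to the head prop gives the cut in "brain space".        */
plane cutting_plane(vec3 sheet_center_in_head_frame,
                    vec3 sheet_normal_in_head_frame)
{
    plane p = { sheet_center_in_head_frame, sheet_normal_in_head_frame };
    /* normalize so downstream math can assume a unit normal */
    double len = sqrt(p.normal.x * p.normal.x +
                      p.normal.y * p.normal.y +
                      p.normal.z * p.normal.z);
    if (len > 0.0) {
        p.normal.x /= len; p.normal.y /= len; p.normal.z /= len;
    }
    return p;
}
```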

This interface was designed in the ’90s, and we’re still stuck on touchscreens!

Blast Motion’s Sensors

Blast sensors analyze your swing

Blast Motion Inc. creates puck-shaped wireless devices that can be embedded into golf clubs and similar sporting equipment. The pucks collect data about the user’s swing, speed, or motion in general. The data is useful feedback to help the user assess and improve their swing.

I like that the interface here is a real golf club, likely the user’s own club, rather than some electronic measuring device. The puck seems small and light enough not to interfere with the swing, and will soon be embedded into the golf club rather than sold as a separate attachment. I’m interested to see how their product fares when it comes out.
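
For a feel of what such a puck’s firmware has to do, here’s a minimal sketch of detecting a swing from accelerometer samples with a simple magnitude threshold. The threshold and the function names are my own illustrative assumptions, not Blast Motion’s design:

```c
#include <math.h>
#include <stdbool.h>

/* One accelerometer sample, in g's. */
typedef struct { double x, y, z; } accel_sample;

/* Crude swing detector: flag a swing when the acceleration
 * magnitude crosses a threshold well above gravity (1 g at rest). */
bool swing_detected(const accel_sample *s, double threshold_g)
{
    double mag = sqrt(s->x * s->x + s->y * s->y + s->z * s->z);
    return mag > threshold_g;  /* e.g. 4.0 g for a committed swing */
}
```

A real product would presumably buffer samples around the peak and derive club-head speed and tempo from the full trace; the point is that the “UI event” here is literally the swing.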

But I Can’t Control My Computer with a Golf Club

Yes, yes, neither of these interfaces generalizes, but maybe generality itself is a requirement we should be moving away from. Why are we shoehorning a touchscreen interface onto everything? Perhaps we need to look at the task at hand and design the best interface for it, not the best touchscreen UI. The ReacTable is a great example of a completely new interface designed for the specific task of creating digital music. (Of course, the app is now available for iOS and Android – back to the touchscreen!) Similarly, the Wii and Kinect have made strides in allowing natural input, but are only recently being considered for serious applications. I really hope that natural interfaces start becoming the norm rather than the exception.

Have you struggled with Pictures Under Glass interfaces for your tasks?
Have you encountered any NUIs (Natural User Interfaces) that you enjoyed (or didn’t)?
Let me know in the comments below.

Fat Fingers and Tiny Buttons

Gordon Gekko on a huge mobile phone

Have you noticed that the phone you use now is larger than the phones we used in the last decade? That is, unless you were Gordon-Gekko-rich back in the ’80s and could afford the fine phone you see him holding here. The truth is that there is a tension between how small we want our mobile devices to be and how fat our fingers continue to be, despite our very best technological innovations.

Patrick Baudisch

I recently went to see a talk by Patrick Baudisch, an HCI researcher at the Hasso Plattner Institute in Potsdam. Some of the research he presented focused on reducing the minimum size that our screens need to be for effective use. One of the most challenging aspects of small screens is the tiny buttons: any button on screen needs to be large enough to be accurately pressed by a human finger. So how do we reduce that size limitation without exponentially increasing the frustration factor?
Patrick Baudisch described two creative ideas for breaking down this barrier. I’ll cover one, nano touch, in this post, and the other, RidgePad, in an upcoming post.

nano touch

First off, it’s hard to press a button you can’t see, so having your finger on top of the screen, obscuring the button you are trying to press, is a problem. How about placing a touchpad behind the device, allowing you to maneuver a pointer on screen without obscuring it? The technology is called nano touch, and it allows users to complete a variety of actions on very small screens (2.4″ down to 0.3″). Here is one video from New Scientist; more are available on the nano touch website.
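
The core trick, as I understand it, is a coordinate mapping: the pointer should appear where your fingertip actually is, as if the device were transparent, which means mirroring the horizontal axis of the rear pad. A minimal sketch with made-up names, assuming both the pad and the screen use normalized top-left-origin coordinates:

```c
typedef struct { double x, y; } point;  /* normalized 0..1 */

/* Map a touch reported by the rear pad (in the pad's own frame)
 * to front-screen coordinates. The horizontal axis is mirrored so
 * the on-screen pointer appears directly "under" the fingertip,
 * as if the device were transparent.                               */
point rear_to_screen(point rear_touch)
{
    point screen = { 1.0 - rear_touch.x,  /* mirror left/right    */
                     rear_touch.y };      /* vertical unchanged   */
    return screen;
}
```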

Why, For The Love Of All That Is Sacred?

You might be asking: why on earth wouldn’t I just play my FPS on my 40-inch plasma at home? Do I really need to play an FPS on the commute between my work computer and my home gaming setup? And I would respond by tagging that question with #firstworldproblems. Patrick points out that a large part of the developing world does not have access to laptops and computers – or, more simply, large screens. Mobile phones are their point of access to technology.

“There is a single true computation platform for the masses today. It is not the PC and not One Laptop Per Child. It is the mobile phone—by orders of magnitude. This is the exciting and promising reality we need to design for.” – Patrick Baudisch

I saw this myself on a recent trip to India. I was marveling at how obsessed young people in India seemed with their phones. And not just like the teenagers we see here, lost to the world and one with their phone, but instead groups of young people gathered around one phone. Note, these are not smartphones; they can play music and maybe download ringtones, but that’s about it.

Indians on cellphones

Photo by meanestindian

So why the excitement, I wondered? What could possibly be so engrossing about a cellphone? And then it hit me: for most of these people, this is the first computer they have personally owned. I remember the first computer I owned; I don’t think it could even play much music – just PC beeps, with exciting siren effects if you coded the BASIC right. But damn, was it exciting! And that’s why small screens are important: they bring the magic of computing to a whole segment of society that was mostly skipped over in the PC revolution.

Thanks for reading! Back soon with RidgePad.