5 Reasons Embedded Systems Designers Should Embrace HCI

As an embedded software engineer who is interested in bringing HCI design practice to embedded devices, I often hear people say, “Oh, but we don’t have a display on our device, so I’m not sure we need that.” Somehow, the embedded systems community seems to have decided that well-designed interactions only apply to software with a GUI. I’d like to argue that’s not true. In fact, embedded devices not only need good interaction design, but they need it more than traditional (computer-based) software does.

1. Embedded devices can’t fall back on default interactions

Wacom Cintiq

Traditional software solutions can always fall back to the screen/keyboard/mouse interaction. For example, when developing image editing software, you can always start with a mouse-and-keyboard interface, despite the fact that a tablet interface may be better.

But embedded devices are almost defined by their lack of a mouse and keyboard. The embedded designer has no default interaction; they need to make the difficult choices up front about what forms of input they can support and how they can relay feedback or information.

The embedded designer has no default interaction, they need to make the difficult choices up front.

A very simple example of this is designing a device that uses WiFi for connectivity. Entering a WEP key is challenging under any circumstances; imagine how much harder it is without a real keyboard. Is it even possible without a display?
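
To make that concrete, here is a minimal sketch in C of one common workaround: the device has no keyboard or display at all, so it briefly acts as its own access point, lets a phone hand it the credentials, and reports progress with nothing more than an LED blink pattern. All the wifi_* and LED calls are hypothetical stubs, not any real vendor API.

```c
/* Minimal sketch (not a real driver): a headless Wi-Fi provisioning state
 * machine. The device starts a temporary access point, a phone connects and
 * posts credentials, then the device joins the real network. The wifi_*
 * functions are hypothetical stubs standing in for a vendor Wi-Fi stack. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef enum { PROV_START_AP, PROV_WAIT_CREDENTIALS, PROV_JOIN, PROV_DONE, PROV_FAILED } prov_state_t;

typedef struct { char ssid[33]; char passphrase[64]; } wifi_creds_t;

/* --- hypothetical hardware/stack stubs --- */
static bool wifi_start_softap(const char *setup_ssid) {
    printf("softap up: %s (user connects with a phone)\n", setup_ssid);
    return true;
}
static bool wifi_poll_credentials(wifi_creds_t *out) {
    /* In a real device this would read a tiny HTTP POST or a BLE write. */
    strcpy(out->ssid, "HomeNetwork");
    strcpy(out->passphrase, "hunter2hunter2");
    return true;
}
static bool wifi_join(const wifi_creds_t *creds) {
    printf("joining %s...\n", creds->ssid);
    return true;
}
static void status_led(const char *pattern) {
    /* A blink pattern is the only "display" this device has. */
    printf("LED: %s\n", pattern);
}

int main(void) {
    prov_state_t state = PROV_START_AP;
    wifi_creds_t creds;

    while (state != PROV_DONE && state != PROV_FAILED) {
        switch (state) {
        case PROV_START_AP:
            status_led("slow blink");
            state = wifi_start_softap("MyDevice-Setup") ? PROV_WAIT_CREDENTIALS : PROV_FAILED;
            break;
        case PROV_WAIT_CREDENTIALS:
            state = wifi_poll_credentials(&creds) ? PROV_JOIN : PROV_WAIT_CREDENTIALS;
            break;
        case PROV_JOIN:
            status_led("fast blink");
            state = wifi_join(&creds) ? PROV_DONE : PROV_FAILED;
            break;
        default:
            break;
        }
    }
    status_led(state == PROV_DONE ? "solid on" : "error double-blink");
    return state == PROV_DONE ? 0 : 1;
}
```

The specifics don't matter; the point is that the designer had to invent an input channel (the phone) and an output channel (the LED) from scratch, because nothing came for free.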

2. Embedded devices go everywhere

Fitbit One

This means they are exposed to the elements, challenging lighting conditions, and portability constraints in ways that regular software is not. For example, wearable fitness devices often get ruined by sweat, which can be quite corrosive to metals. The latest offerings, such as the Fitbit One, are now sweatproof and rainproof, but you can imagine that with the early devices, the software developers had to contend with false readings due to moisture on the sensors.

3. Embedded devices are not the center of attention

Of course, they can be. Certainly, the latest phones and tablets garner a lot of attention. But the vast majority of embedded devices have to operate in a context where the user has other tasks and goals taking their attention. Attention is an increasingly precious and scarce resource as more and more of our devices compete for it. A well-designed device should work well in the background, and capture your attention judiciously, if at all.

A well-designed device should work well in the background, and capture your attention judiciously, if at all.

Think about car entertainment systems: they’re best when they can be operated easily, preferably without even a glance, because the user’s main goal is driving, not playing music. In fact, to keep drivers from taking their hands off the wheel, most newer vehicles provide controls embedded in the steering wheel itself.

4. Embedded devices want you to twist them, pull them, bop them

Remember that old game, Bop It? Twisting, pulling, yes even bopping are affordances. If you haven’t heard the term, Wikipedia says, “An affordance is a quality of an object, or an environment, which allows an individual to perform an action. For example, a knob affords twisting, and perhaps pushing, while a cord affords pulling.”

Keyboards and mice have some affordances, but they’ve been well explored. Embedded devices have all sorts of great new affordances, because they’re not tied to a particular form. They can take advantage of the many ways that we interact with the natural world around us.

Embedded devices enable a whole new universe of interactions that are unexplored solely because the keyboard and mouse didn’t afford them.

A company called Blast Motion creates pucks that can be embedded into golf clubs and tennis rackets to analyze your swing. The interface is simply swinging your golf club. Similarly, pinch-to-zoom and multi-touch didn’t really become popular until the iPhone made your hand the way you interact with your phone. Embedded devices enable a whole new universe of interactions that are unexplored solely because the keyboard and mouse didn’t afford them.

5. Customers care

Finally, embedded designers should care because their customers do. Back in the mid-2000s, everyone thought the cellphone market in the US was saturated, and the only way to sell phones was to make ultra-low-cost versions for China and India. Then Apple came out with the first iPhone in 2007, and all of a sudden, everyone was willing to pay $499 for a cellphone.

Actions that were frustrating before seemed effortless, intuitive… fun, even.

Why? There was nothing special about the hardware or software, technologically speaking. What was special was the interaction experience it gave users. Actions that were frustrating before seemed effortless, intuitive… fun, even. Do you know how many grandparents are happy to use an iPhone? Grandparents! The very same ones you spend hours setting up Blu-ray players and digital photo frames for every Christmas.

So, what now?

Well, that’s the longest rant yet. But I absolutely believe that embedded devices are the next frontier for computing. Low-power networking, sensing technologies, and fast processors are converging right now, making a lot of amazing products possible. But these products won’t go far unless we take the next step.

The only way to move the product from the hands of a few early adopters to the masses is to learn about interaction design, to think about users, their context and goals, and iterate the design until the product is an absolute delight to use.

To start, here are a few resources:

  • Don Norman’s excellent book The Design of Everyday Things will get you to look at everything around you as a designed interface.
  • The Interaction Design Encyclopedia is a great resource explaining the terms and concepts.
  • Scott Klemmer, Stanford professor and HCI star, has a free HCI course on Coursera.
Are you going to start thinking about how to design interactions with your product? Post your thoughts below!

Lean Startup is User Design for Your Business Idea

I had a sudden epiphany yesterday that I’d like to share! Lately, I’ve been reading up about the Lean Startup movement. Lean Startup, pioneered by Eric Ries, is a methodology that involves prototyping and testing one’s hypotheses early on, to avoid the high cost of failure.

The premise is simple: you test your intuitions about your idea, your customers, and your business plan before you even start work on implementing the solution. This way you can find the flaws, or even discover that the original idea is a no-go, early on rather than after you’ve spent your life savings developing it.

Parallels with HCI

So last night I was watching a bunch of Lean Startup videos online from Ignite, and right afterwards I switched tracks and started watching some lectures on HCI. Scott Klemmer, HCI star and soon-to-be UCSD professor, has started a free online course on Coursera. Immediately, I realized the parallels between the two methodologies. HCI and UX recommend prototyping to test ideas, tight design iterations to learn from and change your interface, and developing rich customer personas.

It occurred to me then that Lean Startup is essentially applying HCI and UX concepts to startups. The product you are designing is your business, including all its components – the solution, who you think your customer is, how you plan to target them, and how you intend to make money. You test all the assumptions you make about who your customer is, what they want, and whether your solution provides it.

Just as the developer is not a good judge of an intuitive interface, the entrepreneur is often not the best judge of whether customers will flock to their solution over others. Basically, your customer is your user and the business model you present them with is their interface to your solution.

Other Thoughts

Then today I woke up to find this article from Smashing Magazine posted by my friend, Matt Hong. Clearly, I’m not the only one to see the parallels. The author, Tomer Sharon, goes one step further to say that Lean Startup is just great packaging for UX principles. Having just been introduced to the Lean Startup movement, I can’t verify that, but I can definitely see the similarities. It’s also what draws me to the Lean Startup methodology.

Do you see similarities between the ideas? If you haven’t heard much about one or both, head over and read the article at Smashing magazine.
What are some other HCI concepts that haven’t been applied to the design of a business?

Thumbnail and front page images courtesy of Flickr users betseyweber and krawcowicz respectively.

Goodbye Touch! Hello Post-Touch!

Windows 7 Kinect Gestures

You’ve probably heard me rant about the error of putting a touchscreen on every new product. Touch is slick and easy, but it’s not for everything. And it’s already going out of style. The new kid in town is the natural user interface – Post-Touch, as a number of industry leaders are calling it. Post-Touch is basically an interface that goes beyond the requirement of touch and can detect gestures, usually through some kind of near-field depth camera.

Post-Touch Tech Available Today

Idling – by Pragmagraphr

The Kinect, which I’ve written about before, is one such technology for Post-Touch interfaces. In my current project at the DCOG-HCI Lab at UCSD, we are implementing an augmented workspace that uses the Kinect to detect interactions with the workspace. By tracking users and their hand movements, the workspace can respond in an intelligent manner.

For example, have you ever pulled something up on the screen while taking notes, only to have the computer assume you are idle and turn off the screen? An augmented workspace could detect your note-taking posture and gaze and keep the screen on. No hand-waving necessary. Of course, if you are actually idling like this fellow at right, it should indeed turn off the screen.
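
As a toy illustration of the kind of heuristic such a workspace might use (this is not the real Kinect API or our lab’s code, just the shape of the logic), here is a sketch that keeps the display awake whenever a tracked user’s hand is still moving, even though the keyboard and mouse are untouched:

```c
/* Toy sketch of an "is the user actually idle?" heuristic, assuming some
 * tracker (e.g. a Kinect skeleton feed) hands us a writing-hand position
 * each frame. Nothing here is a real tracking API. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static float dist(vec3 a, vec3 b) {
    return sqrtf((a.x - b.x) * (a.x - b.x) +
                 (a.y - b.y) * (a.y - b.y) +
                 (a.z - b.z) * (a.z - b.z));
}

/* Keep the screen awake if a user is tracked and the hand has moved more
 * than a small threshold recently (note-taking counts as activity even
 * though the keyboard and mouse are untouched). */
static bool keep_screen_on(bool user_tracked, vec3 hand_now, vec3 hand_prev,
                           float seconds_since_last_motion) {
    const float motion_threshold_m = 0.01f;   /* 1 cm of hand travel */
    const float idle_grace_s = 120.0f;        /* 2 minutes of stillness allowed */

    if (!user_tracked)
        return false;                          /* nobody there: let it sleep */
    if (dist(hand_now, hand_prev) > motion_threshold_m)
        return true;                           /* actively writing or gesturing */
    return seconds_since_last_motion < idle_grace_s;
}

int main(void) {
    vec3 prev = {0.10f, 0.00f, 0.50f};
    vec3 now  = {0.13f, 0.01f, 0.50f};        /* hand drifted while writing */
    printf("keep screen on? %s\n",
           keep_screen_on(true, now, prev, 5.0f) ? "yes" : "no");
    return 0;
}
```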

A few months ago, a new company called Leap Motion caused a stir when they demoed their high-resolution gesture-tracking module. Similar to the Kinect in features, although allegedly not in technology, it offers much greater sensitivity and control. Check out their video below to see the possibilities of the Leap Motion. The company appears to be gathering steam, and I’m excited to see their first product release!

How will Post-Touch change things?

And here, I defer to the experts. You should read this great article on what the future holds for Post-Touch, but I’ll provide some highlights here.

Post Touch is smaller gestures – Michael Buckwald, CEO of Leap Motion

As screens get larger, the gestures get more tiring. Try pretending to move your mouse pointer around your desktop screen with your arm. Now try it on your flat-screen TV. Unless you want to develop one gorilla arm muscle, that’s going to get real tiring real fast. Post-Touch will scale down those gestures so they’re not dependent on screen size.
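
Here is a rough sketch of what “scaling down” could mean in practice, with made-up numbers: map the hand’s position inside a small, fixed interaction box to a normalized screen coordinate, so the same few centimetres of wrist motion sweep a phone or a wall-sized TV.

```c
/* Sketch of the "smaller gestures" idea: map hand position inside a fixed
 * interaction box (say 20 cm wide) to normalized screen coordinates, so the
 * same wrist-sized motion drives a phone or a wall display. Numbers are
 * illustrative, not from any particular product. */
#include <stdio.h>

/* Clamp-and-normalize one axis: hand position (meters, relative to the box
 * origin) -> 0.0 .. 1.0 of the screen, regardless of how many pixels it has. */
static double to_screen_fraction(double hand_m, double box_size_m) {
    double t = hand_m / box_size_m;
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return t;
}

int main(void) {
    const double box = 0.20;              /* 20 cm of hand travel = full sweep */
    double hand_x = 0.15;                 /* hand 15 cm into the box */

    int phone_px = (int)(to_screen_fraction(hand_x, box) * 1136);  /* small screen */
    int tv_px    = (int)(to_screen_fraction(hand_x, box) * 3840);  /* big screen  */
    printf("same gesture -> x=%d px on a phone, x=%d px on a TV\n", phone_px, tv_px);
    return 0;
}
```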

Post-Touch Cameras Will Come With Every Laptop – Doug Carmean, Researcher-at-Large for Intel Labs

Wow! This was news to me – Carmean says that as early as next year, Logitech near-field depth cameras are going to show up in laptops. This will be a huge boost to the technology. Everyone who buys a laptop is going to be looking for software that takes advantage of it.

And there’s more, so really, read the article! And tell me what you think below.

You Are The Natural User Interface

Today my boyfriend’s iPhone screen cracked. Not spontaneously – he dropped it – but the cracked screen reminded me of one depressing fact: despite research into Natural User Interfaces and embodied cognition, all these smartphones and tablets are just pictures under glass. Our interactions with them are funneled mostly through one or two fingers. In fact, I’d argue this is a step back from using a mouse and keyboard. Just try coding with a touchscreen keyboard! I dare you. If you haven’t seen Bret Victor’s illuminating rant about Pictures Under Glass, you can read it here.

Olympic Grace for the Rest of Us

With the inspiring Olympic displays of the power and grace of the human body all around us, it’s dreadful that we confine the human body that is capable of this:

Dancer

to interactions like this:

The One Finger Interface (Image by flickingerbrad)

No Olympic grace there, and sadder still, that poor kid is probably looking at many years of pointing and sliding to come.

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?
– Bret Victor

From Bret Victor‘s rant, “The next time you make breakfast, pay attention to the exquisitely intricate choreography of opening cupboards and pouring the milk — notice how your limbs move in space, how effortlessly you use your weight and balance. The only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well. With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?”

So what are some interfaces that truly allow us to interact naturally with our environment and still benefit from technology?

Brain Imaging Made Easy

Acrylic plane interface to brain imaging software

In this 1994 paper, Ken Hinckley, Randy Pausch, and their colleagues detail a system that uses an acrylic plane and a doll’s head to help neurosurgeons interact with brain imaging software. The 3D cutting planes that the neurosurgeons need to view are difficult to specify with a mouse or keyboard, but very intuitive with an acrylic “plane” and a “model” of the head.

From the paper, “All of the approximately 15 neurosurgeons who have tried the interface were able to “get the hang of it” within about one minute of touching the props; many users required considerably less time than this.” Of course, they are neurosurgeons, but I’m guessing it’s very unlikely that they would get the hang of most keyboard and mouse interfaces to this system in about a minute.

This interface was designed in the ’90s, and we’re still stuck on touchscreens!
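
For a sense of why the props feel so natural, here is a back-of-the-envelope sketch (my own illustration, not code from the paper) of the core geometry: the tracked pose of the acrylic plate, expressed relative to the head model, maps directly onto the cutting plane used to slice the brain volume.

```c
/* Back-of-the-envelope version of what a props interface has to compute:
 * given the tracked pose of the acrylic plate relative to the head model,
 * derive the cutting plane (ax + by + cz = d) to slice the brain volume
 * with. This is just the geometry, not the paper's actual implementation. */
#include <stdio.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = {a.x - b.x, a.y - b.y, a.z - b.z}; return r; }
static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Plane through the plate's tracked center, oriented by the plate's tracked
 * normal, expressed in the head model's coordinate frame. */
static void cutting_plane(vec3 plate_center, vec3 plate_normal, vec3 head_origin,
                          double *a, double *b, double *c, double *d) {
    vec3 p = sub(plate_center, head_origin);   /* plate center in head coords */
    *a = plate_normal.x;
    *b = plate_normal.y;
    *c = plate_normal.z;
    *d = dot(plate_normal, p);                 /* ax + by + cz = d */
}

int main(void) {
    vec3 plate_center = {0.02, 0.05, 0.10};    /* from the 6-DOF tracker (made up) */
    vec3 plate_normal = {0.0, 0.0, 1.0};       /* plate held roughly axial */
    vec3 head_origin  = {0.00, 0.00, 0.00};

    double a, b, c, d;
    cutting_plane(plate_center, plate_normal, head_origin, &a, &b, &c, &d);
    printf("slice plane: %.2fx + %.2fy + %.2fz = %.2f\n", a, b, c, d);
    return 0;
}
```

Tilt the plate, and the plane tilts with it; there is nothing for the user to learn because the mapping is literally the physical object.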

Blast Motion’s Sensors

Blast sensors analyze your swing

Blast Motion Inc. creates puck-shaped wireless devices that can be embedded into golf clubs and similar sporting equipment. The pucks collect data about the user’s swing, speed, or motion in general. The data provides useful feedback to help the user assess and improve their swing.

I like that the interface here is a real golf club, likely the user’s own club, rather than some electronic measuring device. The puck seems small and light enough not to interfere with the swing, and will soon be embedded into the golf club rather than sold as a separate attachment. I’m interested to see how their product fares when it comes out.
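
As a toy illustration of the sensor-to-feedback loop (this is not Blast Motion’s algorithm, just made-up numbers), estimating club-head speed can be as simple as taking the peak angular rate from the gyroscope and multiplying by an assumed club length:

```c
/* Toy example of the kind of arithmetic a swing-analysis puck might do:
 * find the peak angular rate from gyroscope samples and multiply by an
 * assumed club length to estimate club-head speed. Illustration only. */
#include <stdio.h>

int main(void) {
    /* Angular rate samples around the swing axis, in radians per second. */
    const double gyro_rps[] = {2.1, 5.8, 12.4, 24.0, 31.5, 28.2, 9.7};
    const int n = sizeof gyro_rps / sizeof gyro_rps[0];
    const double club_length_m = 1.1;          /* assumed driver length */

    double peak = 0.0;
    for (int i = 0; i < n; i++)
        if (gyro_rps[i] > peak)
            peak = gyro_rps[i];

    /* v = omega * r, then convert m/s to km/h for friendlier feedback. */
    double head_speed_kmh = peak * club_length_m * 3.6;
    printf("peak rate %.1f rad/s -> ~%.0f km/h club-head speed\n", peak, head_speed_kmh);
    return 0;
}
```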

But I Can’t Control my Computer with A Golf Club

Yes, yes, neither of these interfaces generalizes to other tasks, but maybe generality is an expectation we should be moving away from. Why are we shoehorning a touchscreen interface onto everything? Perhaps we need to look at the task at hand and design the best interface for it, not the best touchscreen UI. The ReacTable is a great example of a completely new interface designed for the specific task of creating digital music. (Of course, the app is now available for iOS and Android – back to the touchscreen!) Similarly, the Wii and Kinect have made strides in allowing natural input, but are only recently being considered for serious applications. I really hope that natural interfaces start becoming the norm rather than the exception.

Have you struggled with Pictures Under Glass interfaces for your tasks?
Have you encountered any NUIs (Natural User Interfaces) that you enjoyed (or didn’t)?
Let me know in the comments below.

Design and Disaster Onboard Air France Flight 447

Today I read a fascinating, although tragic, article on the investigation of the Air France Flight 447 crash of 2009. Early in the morning on June 1st, the Air France flight from Rio de Janeiro to Paris crashed in a storm, somewhere in the ocean between Brazil and Africa. It took days to find the aircraft and the 228 people on board, of whom there were no survivors, and years to find the flight recorders. Finally, those flight recorders have revealed the pilots’ last conversations as they struggled to recover control of the aircraft.

Design Changes and Disaster

Aircraft cockpit

What they found was that Airbus’ newer cockpit design makes it harder for pilots to use the feedback mechanisms traditionally available to help them assess and control emergency situations. This made it hard for them to comprehend what was happening to the aircraft as it stalled.

In the past, the pilot had to hold down a control (called a side-stick) to cause the plane to climb or descend. Side-sticks in newer Airbus aircraft can be set to a certain angle and left there. When the less-experienced, and seemingly disoriented, younger pilot pulled the nose of the plane up and left it there, the other pilots were unaware he had done this. They struggled to understand why the plane was climbing until it was too late to recover.

“We still have the engines! What the hell is happening? I don’t understand what’s happening.”
– David Robert, Air France Flight 447 Pilot

The side-stick redesign was a popular change because it reduced pilot fatigue and generally did not need to be used when the plane was on autopilot. However, in the stressful emergency aboard Flight 447, when the pilots needed to fly the plane manually, it meant that their shared understanding of the flight environment, their distributed cognition, was hindered by the lack of feedback.

Distributed Cognition in the Cockpit

In safety-critical applications like a flight deck, it is very important that the design of the interface support the pilots’ shared understanding of the flight environment, as well as each other’s intentions and actions, rather than hinder it.

Autothrust Throttle

Another example of the interface hiding feedback from pilots is the design of the auto-thrust feature. Similar to cruise control in a car, the engine thrust is adjusted automatically to keep the plane at a particular speed; however, the adjustments are not reflected in the position of the thrust levers. Again, pilots do not have the tangible feedback from the levers that they had in the past.

According to the article, Airbus defends these changes as part of its design philosophy. Boeing, on the other hand, has a busy and cluttered cockpit interface where every control is manual, despite being electronically managed behind the scenes. The manual controls provide a tangible interface, and the levers even move to reflect automatic adjustments.

This story shows us that even the smallest change in such a safety-critical interface can have disastrous effects. This doesn’t mean that changes can’t be made, but that, as engineers, we need to take into consideration the cognitive load our designs place on users, and in this case, on the distributed cognition of the cockpit as a whole.

Human-Computer Interaction in Your Car

Pioneer AppRadio

More and more, apps and touchscreens are being incorporated into our car dashboards, and without this sort of analysis, it’s only a matter of time before the accident rate starts rising.

While driving, I much prefer the tactile dash interface, which lets me change radio stations or operate the air conditioning without so much as a glance, to the touchscreen on my phone when I try to change a song or answer a call.

What about you? Have you felt the cognitive strain of trying to handle newer, more complex interfaces while driving? Do you think design should ever be placed above safety?

Hacking the Microsoft Kinect for the Real World

Kinect Technologies by Microsoft

Everyone’s heard of the Microsoft Kinect, the gaming technology that caused a huge stir when it launched in 2010. In fact, in 2011 the Kinect broke the Guinness World Record as the “fastest-selling consumer electronics device”, selling 8 million units in its first 60 days.

The Kinect features color and depth cameras that can detect the user’s body and limb positions, without the need for a physical controller. The Kinect also contains an array of microphones which enable voice control in addition to gesture control. In other words, the Kinect can “see” and “hear” you, without your ever having to touch an interface; it’s a Natural User Interface or NUI.
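
To give a flavour of how “seeing” the user becomes a command, here is a small sketch of gesture detection over skeleton data. The joint types and numbers are stand-ins, not the Kinect SDK; the real point is the debouncing: require the pose to hold for a few frames so the interface doesn’t fire on tracking noise.

```c
/* Sketch of how "seeing" the user turns into a command: given per-frame
 * joint positions from a skeleton tracker (types here are stand-ins, not
 * the Kinect SDK), detect a "hand raised above head" gesture and hold it
 * for a few frames before firing, to avoid flicker. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z; } joint_t;   /* meters, y is up */

typedef struct {
    joint_t head;
    joint_t right_hand;
} skeleton_t;

static bool hand_raised(const skeleton_t *s) {
    return s->right_hand.y > s->head.y + 0.10f;   /* 10 cm above the head */
}

int main(void) {
    /* Fake frame stream: the hand rises over the head partway through. */
    skeleton_t frames[] = {
        {{0, 1.60f, 2}, {0.3f, 1.20f, 2}},
        {{0, 1.60f, 2}, {0.3f, 1.55f, 2}},
        {{0, 1.60f, 2}, {0.3f, 1.75f, 2}},
        {{0, 1.60f, 2}, {0.3f, 1.78f, 2}},
        {{0, 1.60f, 2}, {0.3f, 1.80f, 2}},
    };
    const int hold_frames = 3;   /* debounce: require 3 consecutive frames */
    int streak = 0;

    for (int i = 0; i < 5; i++) {
        streak = hand_raised(&frames[i]) ? streak + 1 : 0;
        if (streak == hold_frames)
            printf("frame %d: gesture recognized -> next slide\n", i);
    }
    return 0;
}
```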

Naturally, this means sports and dance games are very popular in the Kinect line-up, but there’s a whole other line-up that fewer people know about – innovative real-world applications that take advantage of the Kinect’s hardware. It’s not that this technology hasn’t been around before, but it has never been as cheap, robust, and accessible as it is now.

Hacker Bounty!

After Adafruit, an open-source electronics advocate, offered a bounty for them, open-source drivers for the Kinect showed up mere days after the product release. Since then, enterprising hackers and startups have been coming up with their own non-gaming applications for the Kinect.

Tedesys makes viewing medical data easier during long surgeries

Tedesys, in Cantabria, Spain, is developing an interface that helps surgeons during long procedures. Normally, to access necessary medical information during the procedure, the surgeon must leave the sterile environment to use a computer and then scrub back into the OR. Tedesys’ interface allows doctors to navigate the medical information they need using gesture and voice control, without contaminating the sterile environment.

At the Royal Berkshire Hospital in the UK, doctors are using the Kinect to make rehabilitation therapy for stroke victims less frustrating. There are no complex controls to worry about, and simple games make the rehabilitation exercises enjoyable. The system improves patients’ strength, control, and mobility, as well as tracking their improvement over time.

At the Lakeside Center for Autism in Issaquah, Wash., staff are using the Kinect to help children with autism work on skill-building, social interaction, and motor planning. You can read more about these uses at Kinect Effect.

Microsoft’s Reaction

Until recently, it appeared Microsoft had decided mostly to look the other way as people used its hardware with unofficial, hacked drivers. But earlier this year, in a surprising about-face, Microsoft decided to jump back into the game by releasing a new version of the technology, called Kinect for Windows, specifically designed for PC applications, with a free SDK and drivers. It also decided to motivate and assist startups using the technology by providing 10 promising finalists with $20,000 to innovate with the Kinect.

Some new startups that were accelerated by this program are:

  • übi interactive can turn any surface into a touchscreen.
  • Ikkos Training aims to use theories of neuroplasticity to train athletes and improve their performance.
  • GestSure allows surgeons to navigate medical data in the OR, as described earlier.
  • Styku creates a virtual “smart” fitting room for retailers.

Frankly, I find these latest uses less exciting than I’d hoped, but perhaps now that Microsoft has made the Kinect a commercially viable interface, startups will be encouraged to bring their ideas to life. I really do believe this technology is game-changing; instead of all our interactions with technology being funneled through physical interfaces like keyboards and mice, we can use our full range of motion in intuitive and ergonomic ways. Once the idea of NUIs really starts to permeate the social consciousness, we will see many innovative uses of this technology.