RidgePad – Touch Detection with Smarts

The second creative idea attacking screen size limitations comes from Christian Holz and Patrick Baudisch at the Hasso Plattner Institute in Potsdam. They contend that the problem is not our fat fingers, but the lack of intelligence used to detect the target of the touch. Think of it this way: sometimes you click a button with the pad of your finger, sometimes with the tip. Either way, you are likely off the target by some amount. But current touchscreens have no way of telling which part of your finger you used and don't attempt to compensate for how far off you might be. They just use the center of the contact area, which is quite inaccurate.

Generalized Perceived Input Model

Generalized Perceived Input Model by Holz and Baudisch

Holz and Baudisch ran experiments showing that users typically hit a variety of different points when trying to touch a target, as shown in the image labeled (a). As you can see, the centers of the areas they touched (solid ovals) were not on target but tended to be offset from it. On the other hand, the dashed oval shows that the offsets are similar and could be intelligently compensated for.

How do we do this? First, they propose a Generalized Perceived Input Model to compensate for the offset. Then, using a touchscreen with the sensitivity of a fingerprint scanner, we can detect the pose of the finger from its print, as shown in (b). With that information, we can make more intelligent decisions about what the target of the touch was.
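The compensation step itself is simple once the pose is known: shift the raw contact point by the systematic offset observed for that pose. Here is a minimal sketch of the idea; the pose labels and offset values are invented for illustration, not the authors' actual calibration data.

```python
# Sketch of pose-based offset compensation, in the spirit of the
# Generalized Perceived Input Model. A real system would learn these
# offsets per user from calibration touches.

# Learned systematic offset (dx, dy, in mm) for each detected finger pose.
# Hypothetical values: e.g. pad touches tend to land 4 mm below the target.
POSE_OFFSETS = {
    "pad_vertical": (0.0, -4.0),
    "tip_vertical": (0.0, -1.0),
    "pad_tilted":   (2.5, -3.0),
}

def corrected_touch(contact_center, pose):
    """Shift the raw contact centroid by the learned offset for this pose."""
    dx, dy = POSE_OFFSETS[pose]
    x, y = contact_center
    return (x - dx, y - dy)

# A pad touch whose raw centroid is (50, 80) is corrected upward by 4 mm.
print(corrected_touch((50.0, 80.0), "pad_vertical"))  # -> (50.0, 84.0)
```

The point is that the correction is deterministic once the pose is recognized, which is why the fingerprint-level sensing is the hard part, not the math.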

The more accurately we can detect touches, the smaller we can make our screens while keeping them usable. Using the new model and fingerprint detection, Holz and Baudisch obtained 1.8 times higher accuracy than capacitive sensing. It also enables new interactions based on gestures such as rolling and pointing. This is one more step toward smaller yet more intuitive touchscreens and fewer missed buttons.

Fat Fingers and Tiny Buttons

Gordon Gekko on huge mobile phone

Gekko Mobile

Have you noticed that the phone you use now is larger than the phones we used in the last decade? That is, unless you were Gordon-Gekko-rich back in the '80s and could afford the fine phone you see him holding here. The truth is that there is a tension between how small we want our mobile devices to be and how fat our fingers continue to be, despite our very best technological innovations.

Patrick Baudisch

I recently went to see a talk by Patrick Baudisch, an HCI researcher at the Hasso Plattner Institute in Potsdam. Some of the research he presented focused on reducing the minimum size our screens need to be for effective use. One of the most challenging aspects of small screens is tiny buttons: any button on screen needs to be large enough to be accurately pressed by a human finger. So how do we reduce that size limitation without exponentially increasing the frustration factor?
Patrick Baudisch described two creative ideas for breaking down this barrier. I'll cover one, nano touch, in this post, and the other, RidgePad, in an upcoming post.

nano touch

First off, it’s hard to press a button you can’t see, so having your finger on top of the screen, obscuring the very button you are trying to press, is a problem. How about placing a touchpad behind the device, letting you maneuver a pointer on screen without obscuring it? The technology is called nano touch, and it allows users to complete a variety of actions on very small screens (2.4″ down to 0.3″). Here is one video from New Scientist; more are available on the nano touch website.
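The core mapping is easy to picture: because the pad is on the back and you view it "through" the device, a rear touch has to be mirrored left-to-right for the on-screen cursor to move the way you expect. A toy sketch of that mapping (my own assumption about the coordinate handling, not the nano touch implementation):

```python
# Toy back-of-device pointer mapping. Assumes the rear touchpad and the
# front screen cover the same physical area, in the same units.

def rear_to_screen(x, y, width):
    """Map a rear-pad touch (x, y) to front-screen cursor coordinates.

    Only the horizontal axis is mirrored; up/down stays the same when
    you flip the device front-to-back around its vertical axis.
    """
    return (width - x, y)

# A touch near the rear pad's left edge lands near the screen's right edge.
print(rear_to_screen(2, 10, width=64))  # -> (62, 10)
```

Nothing deep here, but it shows why back-of-device input feels natural so quickly: one axis flip and the pointer behaves like a mouse.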

Why, For The Love Of All That Is Sacred?

You might be asking: why on earth wouldn’t I just play my FPS on my 40-inch plasma at home? Do I really need to play an FPS while commuting between my work computer and my home gaming setup? I would respond by tagging that question with #firstworldproblems. Patrick points out that a large part of the developing world does not have access to laptops and computers, or, more simply, to large screens. Mobile phones are their point of access to technology.

“There is a single true computation platform for the masses today. It is not the PC and not One Laptop Per Child. It is the mobile phone—by orders of magnitude. This is the exciting and promising reality we need to design for.” – Patrick Baudisch

I saw this myself on a recent trip to India. I marveled at how obsessed young people there seemed with their phones. And not like the teenagers we see here, lost to the world and at one with their phones; instead, groups of young people gathered around one phone. Note that these are not smartphones: they can play music and maybe download ringtones, but that’s about it.

Indians on cellphones

Photo by meanestindian

So why the excitement, I thought? What could possibly be so engrossing in a cellphone? And then it hit me: for most of these people, this is the first computer they have personally owned. I remember the first computer I owned. I don’t think it could even play much music, just PC beeps with exciting siren effects if you coded the BASIC right. But damn, was it exciting! And that’s why small screens are important: they bring the magic of computing to a whole segment of society that was mostly skipped over in the PC revolution.

Thanks for reading! Back soon with RidgePad

The Invisible Jogger

Photo by dafydd359

Today I had an interesting conversation with An Yu, a cognitive science major at UCSD. We were discussing potential applications for understanding synchrony. Another graduate student we know has done some research on people’s ability to synchronize with each other, measuring how well people could keep a simple beat, even after the beat went away, and we were wondering what this could be applied to.

An said she’d noticed that when people walk near each other, you can tell in subtle ways whether they are friends based on how they synchronize with each other. I’ve noticed it too. In fact, I’ve found that if you walk too closely synchronized with other people, they start giving you funny looks. You’re expected to walk faster or slower than people you don’t know; it’s weird to walk at the same speed right next to or behind them.

This whole conversation reminded me of a very interesting idea called Jogging Over a Distance, a research project by Floyd Mueller, Shannon O’Brien and Alex Thorogood. The idea is to give runners living in different cities the feeling of running together through spatial awareness. The runners wear headsets connected to each other over a mobile connection. They can talk to each other, and the system processes the audio to give each runner a sense of running in front of or behind their friend; GPS is used to calculate their relative speeds. Basically, it gives casual runners the sense of social interaction and pace, which can be very motivating and enjoyable.

In fact, your simulated pace can be adjusted so that you could run with someone who is actually faster than you and still feel like you are running together, similar to a handicap in some sports. With this feature, you could run not only with a friend in another city, but even with someone who would normally not be a compatible running partner, someone much faster or slower.
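The handicap idea reduces to simple arithmetic: scale each runner's measured GPS speed by a personal factor before comparing, and render the resulting virtual gap through the audio. A sketch under those assumptions (my own toy model, not the project's actual audio processing):

```python
# Toy version of the "handicapped" pace comparison. Each runner's GPS
# speed is divided by a personal handicap factor, so a runner with a
# handicap of 2.0 must run twice as fast just to stay virtually level.

def relative_position(speed_a, handicap_a, speed_b, handicap_b, seconds):
    """Virtual distance (meters) runner A is 'ahead' of runner B.

    Positive means A leads; the system would then place A's voice
    slightly in front of B in the headset, and vice versa.
    """
    return (speed_a / handicap_a - speed_b / handicap_b) * seconds

# A runs 4.0 m/s with a 2.0 handicap, B runs 2.0 m/s unhandicapped:
# after 60 seconds they are still virtually side by side.
print(relative_position(4.0, 2.0, 2.0, 1.0, 60))  # -> 0.0
```

The virtual gap is what gets mapped to sound, which is why a much faster friend can still "run beside you" the whole way.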

I’m not a runner, but I find this idea quite fascinating. I wonder especially if the experience is intuitive for runners or if they will constantly be struggling to synchronize with their invisible jogging buddy. Also, what happens when one needs to stop at an intersection? Does your buddy need to stop as well? Or can you hit the catch up button? I’d probably be the one to hit the catch up button!

Visualizing Photography

Think of the thousands of photos on your computer, the gigs of music, and so on. You’re not the only one: businesses and governments generate massive quantities of data, such as statistics and financial information. Information visualization is a new way of tackling the massive stream of data produced every day.

To see some interesting examples, check out David McCandless’ site at http://informationisbeautiful.net

Since I love photography, I am interested in visualizing the data associated with it. So many software engineers are also avid photographers, and photography generates so much data (EXIF, tags, geodata), yet surprisingly there is not much visualization work in the field.

However, I found one that I quite like, from Eric Fischer: a visualization of local and tourist photo geodata using the Flickr API. The blue marks represent photos taken by locals, the red marks photos taken by tourists. Here’s the full version for San Diego.
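The local-versus-tourist split behind maps like this is reportedly a simple heuristic: a photographer whose photos in a city span a long period is probably a local, while a short burst of photos suggests a visitor. A rough sketch of that idea, where the 30-day threshold is my own assumption:

```python
# Toy local/tourist classifier for one photographer in one city,
# based only on the date range of their geotagged photos there.
from datetime import date

def classify_photographer(photo_dates, local_span_days=30):
    """Return 'local' (a blue mark) or 'tourist' (a red mark)."""
    span = (max(photo_dates) - min(photo_dates)).days
    return "local" if span >= local_span_days else "tourist"

weekend_trip = [date(2010, 6, 5), date(2010, 6, 6)]
regular      = [date(2010, 1, 3), date(2010, 4, 18), date(2010, 9, 2)]
print(classify_photographer(weekend_trip))  # -> tourist
print(classify_photographer(regular))       # -> local
```

Run over millions of Flickr photos, even a heuristic this crude separates the bar streets from Balboa Park remarkably well.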


Tourist & Local Map: San Diego. By Eric Fischer

As you can see, the major tourist destinations in the city are Balboa Park, the Zoo, downtown, the Gaslamp Quarter, Coronado and SeaWorld. You’d expect the local data to point out secret, cool sightseeing spots that only locals know about. But what I noticed is that it actually shows you the best places to hit the bars: University Ave. between Hillcrest and 30th, Adams Ave., Newport Ave. in Ocean Beach and Garnet Ave. in Pacific Beach.

If you think about it, people are most likely to take photos when out with friends at a bar. In fact, looking closely at South Park, where I live, the one dense blue dot on the map points squarely at Juniper and 30th, the location of the Whistlestop Bar, Station Tavern and Burgers, and Rose Wine Pub.

How cool is that? In a new city, you could use the red areas to decide what to do during the day, and the blue areas to find the best nightlife. This sort of unexpected information becoming apparent is the best part of information visualization.

Some of these patterns might become more obvious if this map animated the data over time. Animated over a day, you’d see the blue marks light up at night, making that relationship more obvious. Animated over a year, you could see the best times to visit (or, more likely, avoid!) the major tourist destinations.

To see a wonderful TED talk on information visualization, check out David McCandless’ talk.


When SDG&E and Google PowerMeter provided a web interface for energy consumption back in 2010, I found myself hopelessly irritated by the lack of usable information. I could see that my power consumption had spiked a week earlier at 6pm, but I had no idea what caused it. The tool provided more information than I had access to before, but just enough to be more frustrating than useful.

One unique way our environment can interact with us is by providing information that we cannot access through our normal senses or abilities, and power consumption is a good example. With the increasing cost and consequences of high energy usage, we all want to reduce our footprint, but how? What we need is better information to make good decisions.

Google Powermeter. Image from Mapawatt.


What’s On?

Enter ElectriSense, a solution for detecting and classifying electrical devices in the home. Basically, ElectriSense is a component you plug into a single power outlet, and it can detect the switching on and off of many different devices throughout your home. It uses the unique pattern of noise that switch-mode power supplies (SMPS, the brick on most of your power adapters) generate on the home’s wiring.
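Conceptually, the detection step is a signature match: each power supply leaves a characteristic electromagnetic-interference fingerprint on the wiring, and a newly observed event is compared against a small library of known devices. A much-simplified sketch of that idea; the device names and frequencies below are invented for illustration, and the real system analyzes full EMI spectra rather than single peaks:

```python
# Toy EMI-signature matcher. Each known device is reduced here to one
# made-up characteristic noise frequency (kHz); ElectriSense actually
# matches richer spectral features coupled onto the power line.

KNOWN_DEVICES = {
    "laptop charger": 62.0,
    "CFL lamp":       48.5,
    "LCD TV":         55.2,
}

def identify(peak_khz, tolerance=1.0):
    """Return the known device whose signature is closest to the observed
    EMI peak, or None if nothing falls within the tolerance band."""
    name, freq = min(KNOWN_DEVICES.items(), key=lambda kv: abs(kv[1] - peak_khz))
    return name if abs(freq - peak_khz) <= tolerance else None

print(identify(48.7))  # -> CFL lamp
print(identify(40.0))  # -> None
```

Because every device's signature travels over the shared wiring, one sensing point really is enough, which is what makes the single-outlet design work.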

Demo Interface from ElectriSense


This research comes from the University of Washington’s UbiComp Lab, from Sidhant Gupta, Matthew Reynolds and Shwetak Patel, and the implications are fascinating. A product with ElectriSense technology and Wi-Fi could communicate with a display in your home, or even your computer or cellphone, and report the current level of power consumption. It could collect usage information and let you look back at periods of high consumption to figure out which appliances were running at the time. My exasperating PowerMeter problems would be solved! One of the most compelling features of this technology is that it requires no installation; it simply plugs into one power outlet. Just one!

Of course, the one issue I can see is privacy. Many homes have an outlet outside the house, and a malicious person could plug this product into that outlet to obtain not only a list of the electronics within the home, but also information on when the house is likely unoccupied. However, every new technology comes with new privacy issues, and since this one requires physical access to the house, it is limited in scope.

If your interest has been piqued, you can read an interview with one of the authors of this paper, Shwetak Patel, who has gone on to win a MacArthur Fellowship (the “genius grant”). He’s full of great ideas for the future of the home. In addition, Belkin acquired his company in 2010, so watch for some innovative ElectriSense products to hit the shelves.

The Alpha and Omega Post

So this is the fated first post, the one most blogs begin and end with. To avoid this phenomenon, I’ve already written my second post!

I decided to start this blog to post interesting articles and innovations in the area of Ubiquitous Computing, Human Computer Interaction and Embedded Systems in general. I’m interested in how humans use computers and how we can design better products to make this interaction seamless, enjoyable and useful. Computers are everywhere these days, but I especially like the ones that become a part of our lives without our necessarily paying attention to them.

Attention Suckers Suck!

We’ve all had the experience of being at a party when suddenly it seems everyone is playing Words with Friends or texting someone else, and it’s not a party anymore. I go home every evening and hate that I’m either in front of a computer screen or a TV screen, both of which completely consume my attention. Sometimes I get my laptop out in front of the TV, so I can be doubly, totally consumed.

On the other end of the spectrum, I just found this simple app on my cellphone (Silent Time Lite) that silences the ringer right before my weekly classes and turns it on right after. If that’s not a huge mental load to you, lucky you – it’s just this sort of simple thing that my brain refuses to handle. But once I configure this app for the semester, I’m done! I don’t have to worry about my phone going off in class, or about missing phone calls or messages for the rest of the day. That’s just the kind of seamless usefulness I find compelling.

I think we are finally reaching a point where we can have many little computers and sensors in our environment that are able to augment our experience. Our cellphones are a great example but as mentioned above, they are quite the attention-seeking little divas. In the future, unique embedded products will be able to intelligently interact with us, without becoming a chore, or taking our attention away from real life. I’m hoping to search for such products and research and post some interesting stuff on this blog.