“Making” The World a Better Place

Front page and thumbnail images for this article are examples of Torolf Sauermann’s 3D Art.

Recently, I heard an excellent talk about 3D printing and the future of personal fabrication from Prof. Jan Borchers of the Media Computing Group at RWTH Aachen University. Prof. Borchers is currently on sabbatical at the Distributed Cognition and HCI Lab at UCSD, which is where I happen to work during the academic year, giving me the awesome opportunity to hear his ideas and thoughts on a variety of HCI-related subjects. One phenomenon he is super excited about is the Maker revolution and the future of “Users as Makers”, as he puts it. I’ll cover a little of it here and maybe some more in posts to follow, since there’s just so much good stuff.

3D Printing

3D Printer

So, you know how you can print out a digital JPG on your photo printer and end up with a physical copy? Well, 3D printers allow you to print out an actual 3D object from a digital file. This means you could be browsing the web for the perfect watch and, when you find it, just download it and print it out!

Well, okay, this may not actually be possible right now, but there’s no reason why this can’t be true in the relatively near future. Currently, there are limitations on materials, color, strength and shape of the printed object, but many of these limitations are being overcome rapidly. Users are printing and sharing 3D art, artifacts, products and models online just like we shared images and music files a decade ago.

The Next Digital Revolution

A Maker Faire Poster

Prof. Borchers sees this as the third digital revolution, and here’s why. The first digital revolution was arguably the rise of the PC. Computation and processing of data – previously only accessible to large corporations – suddenly became cheap and accessible. Anybody could buy a computer and do their own word processing or basic data management.

The second digital revolution was the rise of the Internet. Communication was digitized and became fast, lossless and cheap. Bill in San Diego could digitize his favorite song and share it on Napster, and Fareed in Dubai could download it. Prior to the Internet revolution, Bill could tape his favorite song onto a cassette, stick it in an envelope and mail it to Fareed, but now all this is instantaneous. And instantaneous is magic.

“The key ingredient is digitization – when things become fast, lossless and cheap, that’s when revolution happens.” – Prof. Jan Borchers

In the coming revolution, we will be able to exchange and share things rather than media. 3D printing is not new; corporations have been using it to prototype their products for years. But when sub-$500 3D printers become available, this technology will be in the hands of the regular consumer. In conjunction with PCB mills, laser cutters, and hobby electronics, what we have is essentially manufacturing technology for the masses. What will we make?

Making For the Rest of Us

Antenna Reflector at Jalalabad Fab Lab

Currently, there are a lot of creative items and small objects like gadget cases, coat hooks and the like on sites like Thingiverse.com. But the things that make my heart beat faster are the ones made for people traditionally ignored by corporations, like people with disabilities or those in less developed countries. Fab Labs, as conceptualized by Neil Gershenfeld, are bringing 3D printing and other fabrication technologies to communities all over the world.

Here’s an example of a reflector antenna for long-range WiFi built for Jalalabad, Afghanistan. Prof. Borchers shared a story from India, where students used a fab lab to build a sensor that allows dairy farmers to test the fat content of their milk. In Norway, Sami animal herders used a fab lab to build wireless trackers for their sheep.

The power shift that is happening, while somewhat under the radar right now, is set to change the manufacture and distribution of goods as we know it. No longer are corporations the ones to decide which products get made, and for whom. This is very exciting stuff!

In fact, we have two Fab Labs right here in San Diego: Fab Lab San Diego and Maker Place. So the question is, what will you create?

Move to the Music

This past weekend I had the awesome experience of seeing some HCI in the wild. I was at the Mingei Museum for one of their Early Evening events featuring Bostich and Fussible from Nortec Collective, who were creating music using an unusual table. The table is a glowing blue surface on which a collection of acrylic blocks with funky symbols is placed. The artists create and change the music simply by moving and spinning the blocks in relation to each other. I was thrilled to recognize the table as the reacTable, a research project from the Music Technology Group at Universitat Pompeu Fabra in Barcelona that I’d read about in my Ubiquitous Computing class.

The Tangible Interface

reacTable in action

The table is a unique interface to a software-based synthesizer for electronic music. Before the reacTable, a DJ using such a synthesizer had to control the music with a mouse while viewing the waveforms on screen. As you can imagine, creating complex music with a mouse interface is not very natural or intuitive. The reacTable allows more than one DJ to manipulate multiple tracks easily and naturally on a shared table.

The DJs associate tracks with a set of blocks, each of which has a unique pattern on its base. When a block is placed on the surface, a video camera beneath the surface “sees” the pattern, recognizes the track and starts playing it. The waveform is displayed on the glowing table to provide visual feedback to the musician. By moving a block closer to or further from the center or the other blocks, the music can be transformed – sped up, slowed down, filtered, amplified. You can see a video of how this looks below.
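To make that pipeline concrete, here is a minimal sketch in Python of how such a mapping could work. It is purely illustrative: the track names and the distance-to-volume and angle-to-filter rules are my own inventions, not the actual reacTable software (which, as I understand it, recognizes the block markers using the reacTIVision computer-vision framework).

```python
import math

# Hypothetical mapping from marker IDs (the patterns on block bases) to
# audio loops; the real reacTable assigns synthesizer modules rather
# than fixed loops, so treat this as a cartoon of the idea.
TRACKS = {17: "bassline", 23: "drum_loop", 42: "pad"}

TABLE_CENTER = (0.5, 0.5)  # normalized table coordinates


def play(track, volume, cutoff_hz):
    # Stand-in for a real synthesizer call.
    print(f"{track}: volume={volume:.2f}, filter cutoff={cutoff_hz:.0f} Hz")


def on_block_seen(marker_id, x, y, angle):
    """Called whenever the camera under the table recognizes a block
    at position (x, y) with rotation angle (in radians)."""
    track = TRACKS.get(marker_id)
    if track is None:
        return  # unknown block
    # Invented mapping: distance from the table's center controls volume
    # (closer = louder), and spinning the block sweeps a filter cutoff.
    dist = math.hypot(x - TABLE_CENTER[0], y - TABLE_CENTER[1])
    volume = max(0.0, 1.0 - 2 * dist)
    cutoff_hz = 200 + (angle % (2 * math.pi)) / (2 * math.pi) * 5000
    play(track, volume, cutoff_hz)


# A drum-loop block placed near the center, rotated a quarter turn:
on_block_seen(23, 0.45, 0.55, math.pi / 2)
```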

Even if you have never DJ-ed before, you can probably see how manipulating these blocks is a much more natural interface than using a mouse to move various sliders. The table itself is a joy to look at, blending into the dark environment of a club without the harsh glare of a laptop screen. As a result, the reacTable has been quite successful; its first major outing was Björk’s world tour in 2007, which premiered at the Coachella Music Festival that year. In 2010, Reactable released award-winning mobile versions (without the blocks) for the iPhone and iPad and, more recently, one for Android.

Musical Instruments as Interface

Musical instruments pose a unique problem for HCI. Think about it… We wouldn’t call a violin poorly designed, given the beautiful music it can create and its enduring nature. But at the same time, a user study with novices trying out a violin for the first time is bound to prove disastrous.
Will they find it easy and enjoyable to use? Probably not.
Will they achieve the goal of creating beautiful music? Almost certainly not.
So how do we design and test for the unique experience of creating music? I find this to be a fascinating challenge in HCI.

We all agree that musical instruments are powerful and valuable interfaces, as evidenced by the variety we have preserved for so many generations. However, in the brand new world of digital everything, I’m interested to see whether we can create interfaces that stand the test of time as well as the violin has.


RidgePad – Touch Detection with Smarts

The second creative idea attacking screen size limitations comes from Christian Holz and Patrick Baudisch at the Hasso Plattner Institute in Potsdam. They contend that the problem is not our fat fingers, but the lack of intelligence used to detect the target of a touch. Think of it this way: sometimes you press a button with the pad of your finger, sometimes with the tip. Either way, you are likely off the target by some amount. But current touchscreens have no way of telling which part of your finger you used and don’t attempt to compensate for how far off you might be. They just use the center of the contact area, which is quite inaccurate.

Generalized Perceived Input Model by Holz and Baudisch

Holz and Baudisch ran experiments showing that users hit a variety of different points when trying to touch a target, as shown in the image labeled (a). As you can see, the centers of the areas they touched (solid ovals) were not on target but tended to be offset from it. On the other hand, the dashed oval shows that the offsets are similar and could be intelligently compensated for.

How do we do this? First, they propose using a Generalized Perceived Input Model to compensate for the offset. Then, using a touchscreen with the sensitivity of a fingerprint scanner, we can detect the pose of the finger from its print, as shown in (b). With that information, we can make more intelligent decisions about where the touch was aimed.
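Here is a tiny Python sketch of the underlying idea as I understand it: during calibration you record, for each coarse finger pose, how far the contact centroid lands from the intended target, and at runtime you subtract the learned offset. The pose buckets and function names are my own simplification, not Holz and Baudisch’s actual model.

```python
from collections import defaultdict

offsets = defaultdict(list)  # pose bucket -> list of (dx, dy) samples


def pose_bucket(pitch_deg, roll_deg):
    """Quantize the finger pose (recovered from the fingerprint image)
    into coarse 15-degree buckets so similar poses share a correction."""
    return (round(pitch_deg / 15), round(roll_deg / 15))


def calibrate(pitch, roll, touch_xy, target_xy):
    """Record how far this touch landed from a known target."""
    dx = target_xy[0] - touch_xy[0]
    dy = target_xy[1] - touch_xy[1]
    offsets[pose_bucket(pitch, roll)].append((dx, dy))


def corrected_touch(pitch, roll, touch_xy):
    """Shift the reported centroid by the mean offset for this pose."""
    samples = offsets.get(pose_bucket(pitch, roll))
    if not samples:
        return touch_xy  # no calibration data for this pose
    mean_dx = sum(dx for dx, _ in samples) / len(samples)
    mean_dy = sum(dy for _, dy in samples) / len(samples)
    return (touch_xy[0] + mean_dx, touch_xy[1] + mean_dy)
```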

The more accurately we can detect touches, the smaller we can make our screens while keeping them usable. Using the new model and fingerprint detection, Holz and Baudisch obtained 1.8 times higher accuracy than capacitive sensing. In addition, the approach enables new interactions, such as rolling and pointing gestures. This is one more step toward smaller yet more intuitive touchscreens and fewer missed buttons.

Fat Fingers and Tiny Buttons

Gordon Gekko on a huge mobile phone

Have you noticed that the phone you use now is larger than the one you used a decade ago? That is, unless you were Gordon-Gekko-rich back in the ’80s and could afford the fine phone you see him holding here. The truth is that there is a tension between how small we want our mobile devices to be, and how fat our fingers continue to be, despite our very best technological innovations.

Patrick Baudisch

I recently went to see a talk by Patrick Baudisch, an HCI researcher at the Hasso Plattner Institute in Potsdam. Some of the research he presented focused on reducing the minimum size our screens need to be for effective use. One of the most challenging aspects of small screens is the tiny buttons: any on-screen button needs to be large enough to be accurately pressed by a human finger. So how do we reduce that size limitation without drastically increasing the frustration factor?
Patrick Baudisch described two creative ideas for breaking down this barrier. I’ll cover one, nano touch, in this post, and the other, RidgePad, in an upcoming post.

nano touch

First off, it’s hard to press a button you can’t see, and a finger on top of the screen obscures exactly the button you are trying to press. How about placing a touchpad behind the device, allowing you to maneuver a pointer on screen without obscuring it? The technique is called nano touch, and it allows users to complete a variety of actions on very small screens (2.4″ down to 0.3″). Here is one video from New Scientist; more are available on the nano touch website.
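Conceptually, the interesting bit is just a coordinate mapping: the rear pad is read in its own frame and flipped horizontally so the pointer tracks the finger as seen from the front. Here is a toy sketch of that mapping in Python; it is my simplification for illustration, not the actual nano touch implementation, and the screen dimensions are made up.

```python
SCREEN_W, SCREEN_H = 240, 320  # hypothetical tiny display, in pixels


def back_touch_to_pointer(px, py):
    """Map a rear-touchpad contact (px, py normalized to [0, 1] in the
    pad's own frame) to front-screen pixel coordinates. The x-axis is
    mirrored so that when the finger moves toward the device's right
    edge, the pointer also moves right as seen from the front."""
    x = (1.0 - px) * (SCREEN_W - 1)
    y = py * (SCREEN_H - 1)
    return (round(x), round(y))


# A touch in the middle of the back pad lands mid-screen:
print(back_touch_to_pointer(0.5, 0.5))  # -> (120, 160)
```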

Why, For The Love Of All That Is Sacred?

You might be asking, after watching that demo: why on earth wouldn’t I just play my FPS on my 40-inch plasma at home? Do I really need to play an FPS on a tiny screen somewhere between my work computer and my home gaming setup? And I would respond by tagging that question with #firstworldproblems. Patrick points out that a large part of the developing world does not have access to laptops and computers, or more simply, large screens. Mobile phones are their point of access to technology.

“There is a single true computation platform for the masses today. It is not the PC and not One Laptop Per Child. It is the mobile phone—by orders of magnitude. This is the exciting and promising reality we need to design for.” – Patrick Baudisch

I saw this myself on a recent trip to India, where I marveled at how obsessed young people seemed to be with their phones. And not just like the teenagers we see here, lost to the world and one with their phone; instead, groups of young people gathered around a single phone. Note, these are not smartphones; they can play music and maybe download ringtones, but that’s about it.

Photo by meanestindian

So why the excitement, I thought? What could possibly be so engrossing about a cellphone? And then it hit me: for most of these people, this is the first computer they have personally owned. I remember the first computer I owned; I don’t think it could even play much music, just PC beeps with exciting siren effects if you coded the BASIC right. But damn, was it exciting! And that’s why small screens are important: they bring the magic of computing to a whole segment of society that was mostly skipped over in the PC revolution.

Thanks for reading! Back soon with RidgePad.

The Invisible Jogger

Photo by dafydd359

Today I had an interesting conversation with An Yu, a cognitive science major at UCSD. We were discussing potential applications for understanding synchrony. Another graduate student we know has researched people’s ability to synchronize with each other, measuring how well they could keep a simple beat even after the beat went away, and we were wondering what this could be applied to.

An said she’d noticed that when people walk near each other, you can tell in subtle ways whether they are friends based on how they synchronize with each other. I’ve noticed it too. In fact, I’ve found that if you walk too closely synchronized with people you don’t know, they start giving you funny looks. You’re expected to walk faster or slower than strangers; it’s weird to walk at the same speed right next to or behind them.

This whole conversation reminded me of a very interesting idea called Jogging Over a Distance, a research project by Floyd Mueller, Shannon O’Brien and Alex Thorogood. The idea is to give runners living in different cities the feeling of running together through spatial audio cues. The runners wear headsets connected to each other over a mobile connection. They can talk to each other, and the system processes the audio to give each runner the sense of running in front of or behind their friend, with GPS used to calculate their relative speeds. Basically, it gives casual runners the sense of social interaction and pace that can be very motivating and enjoyable.

In fact, your simulated pace can be adjusted so that you can run with someone who is actually faster than you and still feel like you are keeping up; this is similar to a handicap in some sports. With this feature, you could run not only with a friend in another city, but even with one who would normally not be a compatible running partner: someone much faster or slower.
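The core mechanics seem simple enough to sketch. Here is a rough Python illustration of how GPS speed, a per-runner target pace (the handicap) and the front/behind audio cue could fit together; the function names, scale factor and mapping are my guesses for illustration, not the project’s actual code.

```python
def relative_position(my_speed, my_target, partner_speed, partner_target):
    """Return a value in [-1, 1]: negative means your partner sounds
    behind you, positive means ahead. Speeds come from GPS; targets are
    each runner's personal comfortable pace, so a slower runner at full
    effort can still 'keep up' with a faster partner."""
    my_effort = my_speed / my_target
    partner_effort = partner_speed / partner_target
    diff = partner_effort - my_effort
    return max(-1.0, min(1.0, 5 * diff))  # invented scale factor, clamped


def pan_audio(rel_pos):
    """Stand-in for real spatialized audio: place the partner's voice
    on a front/back axis according to rel_pos."""
    where = "ahead of" if rel_pos > 0 else "behind"
    print(f"Partner sounds {where} you (offset {rel_pos:+.2f})")


# I run 10 km/h against a 10 km/h target; my faster friend runs 12 km/h
# against her 13 km/h target, so she sounds slightly behind me.
pan_audio(relative_position(10, 10, 12, 13))
```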

I’m not a runner, but I find this idea quite fascinating. I especially wonder whether the experience is intuitive for runners, or whether they would constantly struggle to synchronize with their invisible jogging buddy. Also, what happens when one runner needs to stop at an intersection? Does the buddy need to stop as well? Or can you hit the catch-up button? I’d probably be the one hitting the catch-up button!