Hacking the Microsoft Kinect for the Real World

Kinect Technologies by Microsoft

Everyone’s heard of the Microsoft Kinect, the gaming technology that caused a huge stir when it launched in 2010. In fact, in 2011 the Kinect broke the Guinness World Record as the “fastest-selling consumer electronics device”, selling 8 million units in its first 60 days.

The Kinect features color and depth cameras that can detect the user’s body and limb positions, without the need for a physical controller. The Kinect also contains an array of microphones which enable voice control in addition to gesture control. In other words, the Kinect can “see” and “hear” you, without your ever having to touch an interface; it’s a Natural User Interface or NUI.

Naturally, this means sports and dance games are very popular in the Kinect line-up, but there’s a whole other line-up that fewer people know about – innovative real-world applications that take advantage of the Kinect’s hardware. The underlying technology isn’t new, but it has never been as cheap, robust and accessible as it is now.

Hacker Bounty!

After Adafruit, an open-source electronics advocate, offered a bounty for them, open-source drivers for the Kinect showed up mere days after the product’s release. Since then, enterprising hackers and startups have been coming up with their own non-gaming applications for the Kinect.
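
Those bounty-winning drivers became the OpenKinect/libfreenect project, which exposes the sensor’s raw 11-bit depth image directly. As a small taste of what hackers then build on, here is a sketch of converting those raw readings to meters, using an empirical approximation that circulated in the OpenKinect community – the coefficients are community-derived estimates, not official Microsoft values:

```python
import math

def raw_depth_to_meters(raw_values):
    """Convert raw 11-bit Kinect disparity readings to depth in meters.

    Uses an empirical approximation popularized by the OpenKinect
    community; the coefficients are community-derived estimates.
    A raw reading of 2047 means the sensor got no valid measurement.
    """
    depths = []
    for raw in raw_values:
        if raw >= 2047:
            depths.append(None)  # sensor could not measure this pixel
        else:
            depths.append(0.1236 * math.tan(raw / 2842.5 + 1.1863))
    return depths

# Larger raw readings correspond to surfaces further from the camera.
print(raw_depth_to_meters([600, 840, 1000, 2047]))
```

Real code would grab whole frames from the driver (libfreenect has Python bindings) and apply this mapping per pixel; the sketch just shows the conversion itself.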

Tedesys makes viewing medical data easier during long surgeries

Tedesys, in Cantabria, Spain, is developing an interface that helps surgeons during long procedures. Normally, to access necessary medical information during the procedure, the surgeon must leave the sterile environment to use a computer and then scrub back in to the OR. Tedesys’ interface allows doctors to navigate the medical information they need using gesture and voice control, without contaminating the sterile environment.

At the Royal Berkshire Hospital in the UK, doctors are using the Kinect to make rehabilitation therapy for stroke victims less frustrating. There are no complex controls to worry about, and simple games make the rehabilitation exercises enjoyable. The system improves patients’ strength, control and mobility, and tracks their improvement over time.

At the Lakeside Center for Autism in Issaquah, Wash., staff are using the Kinect to help children with autism work on skill-building, social interaction and motor planning. You can read more about these uses at Kinect Effect.

Microsoft’s Reaction

Until recently, it appeared Microsoft had decided mostly to look the other way as people used their hardware with the unofficial, hacked drivers. But earlier this year, in a surprising about-face, Microsoft decided to jump back into the game by releasing a new version of their technology, called Kinect for Windows, designed specifically for PC applications, with a free SDK and drivers. They also decided to motivate and assist startups using their technology by providing 10 promising finalists with $20,000 each to innovate with the Kinect.

Some new startups accelerated by this program are:

  • übi interactive turns any surface into a touchscreen.
  • Ikkos Training aims to use theories of neuroplasticity to train athletes and improve their performance.
  • GestSure lets surgeons navigate medical data in the OR, much like the Tedesys system described earlier.
  • Styku creates a virtual “smart” fitting room for retailers.

Frankly, I find these latest uses less exciting than I had hoped, but perhaps now that Microsoft has made the Kinect a commercially viable interface, startups will be encouraged to bring their ideas to life. I really do believe this technology is game-changing; instead of funneling all our interactions with technology through physical interfaces like keyboards and mice, we can use our full range of motion in intuitive and ergonomic ways. Once the idea of NUIs really starts to permeate the social consciousness, we will see many innovative uses of this technology.

“Making” The World a Better Place

Front page and thumbnail images for this article are examples of Torolf Sauermann’s 3D Art.

Recently, I heard an excellent talk about 3D printing and the future of personal fabrication from Prof. Jan Borchers of the Media Computing Group at RWTH Aachen University. Prof. Borchers is currently on sabbatical at the Distributed Cognition and HCI Lab at UCSD, which happens to be where I work during the academic year, giving me the awesome opportunity to hear his ideas and thoughts on a variety of HCI-related subjects. One phenomenon he is super excited about is the Maker revolution and the future of “Users as Makers”, as he puts it. I’ll cover a little of it here and maybe more in posts to follow, since there’s just so much good stuff.

3D Printing

3D Printer

So, you know how you can print out a digital JPG on your photo printer and have an actual copy? Well, 3D printers allow you to print out an actual 3D object from a digital file. This means you could be browsing the web for the perfect watch and when you find it, just download it and print it out!

Well, okay, this may not actually be possible right now, but there’s no reason why this can’t be true in the relatively near future. Currently, there are limitations on materials, color, strength and shape of the printed object, but many of these limitations are being overcome rapidly. Users are printing and sharing 3D art, artifacts, products and models online just like we shared images and music files a decade ago.

The Next Digital Revolution

A Maker Faire Poster

Prof. Borchers sees this as the 3rd digital revolution, and here’s why. The first digital revolution was arguably the rise of the PC. Computation and processing of data – previously only accessible to large corporations – suddenly became cheap and accessible. Anybody could buy a computer and do their own word processing or basic data management.

The second digital revolution was the rise of the Internet. Communication was digitized and became fast, lossless and cheap. Bill in San Diego could digitize his favorite song, share it on Napster and Fareed in Dubai could download it. Prior to the internet revolution, Bill could tape his favorite song onto a cassette tape, stick it in an envelope and mail it to Fareed, but now all this is instantaneous. And instantaneous is magic.

“The key ingredient is digitization – when things become fast, lossless and cheap, that’s when revolution happens.” – Prof. Jan Borchers

In the coming revolution, we will be able to exchange and share physical things rather than just media. 3D printing is not new; corporations have been using it to prototype their products for years. But when sub-$500 3D printers become available, this technology will be in the hands of the regular consumer. In conjunction with PCB mills, laser cutters and hobby electronics, what we have is essentially manufacturing technology for the masses. What will we make?

Making For the Rest of Us

Antenna Reflector at Jalalabad Fab Lab

Currently, sites like Thingiverse.com host a lot of creative items and small objects like gadget cases, coat hooks and the like. But the things that make my heart beat faster are those made for people traditionally ignored by corporations, like people with disabilities or those in less developed countries. Fab Labs, as conceptualized by Neil Gershenfeld, are bringing 3D printing and other technologies to communities all over the world.

Here’s an example of a reflector antenna for long-range WiFi built for Jalalabad, Afghanistan. Prof. Borchers shared a story from India, where students used a fab lab to build a sensor that allows dairy farmers to test the fat content of their milk. In Norway, Sami animal herders used a fab lab to build wireless trackers for their sheep.

The power shift that is happening, while somewhat under the radar right now, is set to change the manufacture and distribution of goods as we know it. No longer are corporations the ones to decide which products get made, and for whom. This is very exciting stuff!

In fact, we have two Fab Labs right here in San Diego, Fab Lab San Diego and Maker Place. So the question is, what will you create?

Move to the Music

This past weekend I had the awesome experience of seeing some HCI in the wild. I was at the Mingei Museum for one of their Early Evening events featuring Bostich and Fussible from Nortec Collective, and they were creating music using an unusual table: a glowing blue surface on which a collection of acrylic blocks with funky symbols is placed. The artists create and change the music simply by moving and spinning the blocks in relation to each other. I was thrilled to recognize the table as the reacTable, a research project from the Music Technology Group at Universitat Pompeu Fabra in Barcelona that I’d read about in my Ubiquitous Computing class.

The Tangible Interface


reacTable in action

The table is a unique interface to a software-based synthesizer for electronic music. Before the reacTable, a DJ using such a synthesizer had to use a mouse to control the music while viewing the waveforms on screen. As you can imagine, creating complex music with a mouse is not very natural or intuitive. The reacTable allows more than one DJ to manipulate multiple tracks easily and naturally on a shared table.

The DJs associate tracks with a certain set of blocks, each of which has a unique pattern on its base. When a block is placed on the surface, a video camera beneath the surface “sees” the unique pattern, recognizes the track and starts playing it. The waveform is displayed on the glowing table to provide visual feedback to the musician. By moving a block closer to or further from the center or the other blocks, the music can be transformed – sped up, slowed down, filtered, amplified. You can see a video of how this looks below.
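
As a rough illustration of that mapping – fiducial pattern to track, block position to sound parameter – here is a toy sketch. The IDs, track names and volume rule are my own invented stand-ins, not the reacTable’s actual implementation (which tracks its fiducials with the reacTIVision computer-vision framework):

```python
import math

# Hypothetical mapping from fiducial pattern IDs to loaded tracks.
TRACKS = {7: "bassline", 12: "drums", 23: "synth-pad"}

def block_state(fiducial_id, x, y, table_radius=0.5):
    """Resolve a detected block into a track plus a playback parameter.

    (x, y) is the block's position relative to the table center, in
    meters. As a simple stand-in for the real mapping, distance from
    the center scales volume from 1.0 (center) down to 0.0 (edge).
    """
    track = TRACKS.get(fiducial_id)
    if track is None:
        return None  # unknown pattern: ignore the object
    distance = math.hypot(x, y)
    volume = max(0.0, 1.0 - distance / table_radius)
    return {"track": track, "volume": round(volume, 2)}

print(block_state(12, 0.1, 0.0))   # drums near the center: loud
print(block_state(12, 0.45, 0.2))  # drums near the edge: quiet
```

The real instrument derives many more parameters (rotation controls one value, proximity between blocks patches modules together), but the camera-sees-pattern, position-shapes-sound loop is the same idea.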

Even if you have never DJ-ed before, you might be able to see how manipulating these blocks is a much more natural interface than using a mouse to move various sliders. The table itself is a joy to look at, blending into the dark environment of a club without the harsh glare of a laptop screen. As a result, the reacTable has been quite successful; its first major outing was Björk’s 2007 world tour, which premiered at the Coachella Music Festival that year. In 2010, Reactable released an award-winning mobile version (without the blocks) for the iPhone and iPad and, more recently, one for Android.

Musical Instruments as Interface

Musical instruments pose a unique problem for HCI. Think about it… We wouldn’t call a violin poorly designed, given the beautiful music it can create and its enduring nature. But at the same time, a user study with novices trying out a violin for the first time is bound to prove disastrous.
Will they find it easy and enjoyable to use? Probably not.
Will they achieve the goal of creating beautiful music? Almost certainly not.
So how do we qualitatively design and test for the experience of creating music? I find this to be a unique challenge in HCI.

We all agree that musical instruments are powerful and valuable interfaces, as evidenced by the variety we have preserved for so many generations. However, in the brand new world of digital everything, I’m interested to see if we are able to create similarly enduring interfaces that will stand the test of time as well as the violin has.


Breaking the Code for Girls

Percentages of Women getting STEM degrees

This post is a bit more on the personal side. As a woman in computer science, I’ve gotten used to being one of few women in the room. I often don’t even notice it anymore. But the truth is, I’ve never understood why more young women don’t pursue CS (Computer Science).

Far more women pursue mathematics than CS, and the same goes for chemistry and biology, so it’s not the “girls aren’t good at math” stereotype at work. The only STEM (Science, Technology, Engineering and Math) majors that fare worse than CS are physics and the more traditional engineering disciplines – civil, materials, mechanical and so on.

Maybe a lot of girls don’t see themselves on a construction site wearing a hard hat (and kudos to those who do!), but most middle and high schoolers are comfortable with computer use. So why do girls shy away from CS?

It turns out that many boys in middle school get into programming because of their engagement with video games. Since fewer girls get into gaming, and fewer games are aimed at girls, they don’t have this same natural transition. By the time they get to college, the CS classes are filled with boys who have been programming for several years, which can be very intimidating to both male and female newcomers to the field.

Change Is Possible

Celebration of Women in Computing - Socal

In recent years, however, there has been a big push at engineering schools to increase the number of women pursuing CS at the undergraduate level. One school in particular, Harvey Mudd College, has more than tripled the percentage of women in CS, to an astonishing 42 percent. I went to Pomona College and took all my CS courses at Harvey Mudd from 1997 to 2001. The running joke at the time was that the ratio of men to women at Harvey Mudd was pi to one (haha), but that was actually the ratio at the school; the ratio in the CS department was more like 10 to 1.

So how did they bring about such a huge change? In 2006, Harvey Mudd appointed a new president, Maria Klawe. The school was in the process of revamping its introductory computer science classes, and Klawe, together with the CS department, designed courses that took away some of the intimidation factor and improved support and community for the women. You can read more about it at the NYTimes.

Maria Klawe at UCSD

I recently saw Maria talk at UCSD, and part of her talk focused on inspiring a sense of community and encouragement among women in CS. All too often, we’re so busy making sure we’re as good as the guys, if not better, that we forget to build camaraderie with the women. We’re so busy proving to ourselves that we’re not different from the guys that we gloss over the ways we are. I learned about something called Impostor Syndrome, where a lot of women feel they are faking it despite being ridiculously accomplished in their fields. It’s not that only women have these issues, but when women have them, they have fewer role models to look at and think, “Well, she’s doing it, so I can too!”

So if you’re in computer science, male or female, look around and see if you can’t encourage a young woman who’s got the talent to be a great engineer. Help her attend a Grace Hopper Celebration, a computing convention just for women. The truth is, computer science is a great, well-paid field that is applicable to almost any aspect of the world you can think of. We just need to spread the word that it’s not a twitchy, antisocial boys’ club, but a fun, exciting and varied career option for women as well.



The best thing about ZebraNet is the fact that it’s actually a network solution designed for zebras! Why do zebras need a network, you ask? Well, the network is really for the scientific researchers who use it to study the migration patterns of wild zebras as well as their daily social behavior.

Zebras by Chris Willis

Personally, I also love two specific aspects of this work: first, that the researchers had to design for and work with real zebras in the wild, and second, that they got to travel to Kenya to do it.

The Design Challenge

The ZebraNet project from Princeton University had two major parts: designing a collar that zebras could wear to collect information about their movement patterns, and designing a peer-to-peer network protocol that would return a large fraction of the data to the researchers even when many of the zebras are out of range of the receivers.

In addition, the receivers (or base stations) are not fixed installations like cellular base stations. Instead, the researchers drive vehicles around the savanna to collect the data, hoping to get within range of a few zebras.

How it works

ZebraNet Project

So how does this work? Essentially, the collars collect data on the movements of the zebras using GPS. They then forward this data to other zebra collars that have historically been successful at transferring data to the base station. Perhaps these zebras are the bravest of the bunch and venture furthest from the pack – zebras with a mind of their own. This protocol allows the data from the more conservative zebras to reach the researchers despite their being out of range of the base station.
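
A toy simulation of this history-based idea might look like the following sketch. The class, the simple “delivery level” scoring and the meeting logic are my own simplification for illustration, not ZebraNet’s published protocol or code:

```python
class Collar:
    """One zebra's collar: GPS logger plus store-and-forward radio."""

    def __init__(self, name):
        self.name = name
        self.level = 0.0   # history of successful base-station contact
        self.data = []     # GPS readings awaiting upload

    def log_position(self, reading):
        self.data.append(reading)

    def meet_base_station(self):
        """Upload everything to the researchers' truck."""
        uploaded, self.data = self.data, []
        self.level += 1.0  # reward the successful delivery
        return uploaded

    def decay(self, factor=0.5):
        self.level *= factor  # old successes slowly stop counting

def exchange(a, b):
    """When two collars come into radio range, push data toward the
    collar with the better delivery history."""
    src, dst = (a, b) if b.level > a.level else (b, a)
    dst.data.extend(src.data)
    src.data = []

shy = Collar("shy-zebra")
brave = Collar("brave-zebra")
brave.level = 2.0                 # brave has met the truck before

shy.log_position((37.1, -2.4))
exchange(shy, brave)              # shy's reading hops to brave
print(brave.meet_base_station())  # → [(37.1, -2.4)]
```

The decay step matters in practice: a zebra that met the truck months ago shouldn’t keep collecting everyone else’s data forever.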

Another key requirement is a very long collar lifetime. Zebras can’t be counted on to charge their collars every night, so the collars have to store all the required data and work for several months or more without intervention. To solve this, the collars recharge themselves using a solar array and then use that energy very efficiently. For more details, you can check out the NSF page, or this excellent article on the BBC.

Survival in the Wild

When I first read this paper, it was right on the heels of the big San Diego blackout, and I realized how useful it would be for cellphones to function this way when power systems are interrupted. While many users lose all forms of communication, some may have backup power and internet at their workplaces, or even a cellphone connection. Emergency text messages could relay through nearby users until they find a way to an actual base station, an internet connection, or even the intended recipient. The message could also include GPS tracking information if needed.

In emergency situations, this could be very useful for getting a message out to family saying you are okay or, alternatively, that you need help. With newer technologies like Bluetooth Low Energy, this could be done at very low power consumption, allowing your phone to work in emergency mode for a long time.

What do you think? Shouldn’t our phones be better equipped to help us during emergencies? What other applications could be helpful in emergency situations?


PhotoXplore Helps You Shoot Manual

Typical explanation of shutter speed

This quarter I took a class in Information Visualization, a topic I introduced in this post. I was particularly interested in ways to visualize information related to photography. One component of photography that we don’t often get to see is how the photographer got the shot, i.e. the settings used.

While many sites, such as Flickr, let you view a photo’s EXIF information, there’s no easy way to view it for groups of photos, or to get a sense of what settings people use for a certain genre of photography.

For example, say you’re at the Eiffel Tower and want a beautiful night shot and you’ve got your brand new DSLR out. But despite your best efforts at plowing through explanations of shutter speed and aperture, the crazy fractions and notation continue to defy you. That’s where my project, PhotoXplore, fits in.

Shimona's Photos on PhotoXplore

Basically, the idea is that by visualizing the images in a simple interactive chart, you can make connections and begin to understand the relationships without needing to understand the numbers… for now at least. You can explore the chart by highlighting areas (called brushing) and watching the gallery area change. And vice versa: if you see an image you like, you can mouse over it and see where it pops up in the chart area.
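
The brushing interaction can be sketched in a few lines: treat each photo as a point in settings space, and a brushed rectangle as a filter that repopulates the gallery. The field names and sample EXIF values below are illustrative stand-ins, not PhotoXplore’s actual data model:

```python
# Each photo is a point in (shutter speed, aperture) space.
photos = [
    {"title": "eiffel-at-night", "shutter_s": 15.0, "f_number": 8.0},
    {"title": "street-portrait", "shutter_s": 1 / 200, "f_number": 1.8},
    {"title": "waterfall",       "shutter_s": 2.0,    "f_number": 11.0},
]

def brush(photos, shutter_range, f_range):
    """Return the photos whose settings fall inside the brushed box."""
    s_lo, s_hi = shutter_range
    f_lo, f_hi = f_range
    return [p for p in photos
            if s_lo <= p["shutter_s"] <= s_hi
            and f_lo <= p["f_number"] <= f_hi]

# Brushing the long-exposure corner of the chart surfaces night shots.
selected = brush(photos, shutter_range=(1.0, 30.0), f_range=(5.6, 16.0))
print([p["title"] for p in selected])  # → ['eiffel-at-night', 'waterfall']
```

The reverse direction (mousing over a gallery image to highlight its point in the chart) is just the same lookup run the other way.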

The quarter system is way too short for a full project, so I wasn’t able to do any user studies, but I did notice some fun things while using it myself. For one, by plotting different photographers’ work, you can see what settings they like to use. Apparently, I like to stick to wide apertures and handheld shots – knowing this motivates me to branch out a bit.

Photos of Northern Lights

I discovered another cool thing while exploring the Iceland Landscape photo set. There was an interesting clump of photos in one area of the chart, somewhat separate from the rest. On brushing over them, the gallery immediately repopulated with photos of the Northern Lights! It was immediately clear that to shoot the Northern Lights you need a wide aperture and a very long exposure. This little discovery captures the concept behind the PhotoXplore interface – it aims to provide a fun way to explore photos, presenting the images and their settings together and making such discoveries easy and intuitive.

A quick disclaimer: all images are from Flickr via their API, and credit goes to the original photographers. If you click on an image in the gallery, you can follow it to its original page on Flickr. Also, it’s not complete yet – I’d really like to let users come up with their own searches and save them, but for now it’s pre-populated with some sets of data. If you’d like a new set of images added, I can generate it and add it to the list. Check it out and let me know what you think!