From Keyboard-Driven UI to Google Glass: What’s the Next Metaphor?

2013-04-29

On the eve of the release of Google Glass, we’re continuing to hear tech pundits go on about how nobody except wanna-be hipsters is going to wear Google Glass. Just for the record, many of these folks are the same ones who went on record in 2007 saying that no one was going to want to listen to music on their cell phones. Oops. Then there’s all the noise about Apple’s unannounced iWatch, with hours of podcast time spent speculating on what the interface will look like and what revolutionary thing Apple will have to do to have another hit on their hands. Ugh. What we’re experiencing is the confusion and slight panic as we continue to transition away from the desktop metaphor toward … well, toward something else.

I used Google Glass – Joshua Topolsky/The Verge

The touch revolution that exploded with iOS and then Android broke us free from the constraints of the mouse/keyboard/menu user interface, but what the next metaphor is going to be is not yet completely clear. While everyone focuses on whether we will continue to swipe our devices or wave at them mid-air or talk to them, the real danger is that I’m simply taking my desktop interface and trying to do all the swiping and talking without the benefit of a mouse and keyboard. Clearly that won’t really work. Again, what’s the metaphor?

Having experimented with “portable computing” since the mid-80s Kaypro/TRS-80 Model 100 days, I observed that the challenges we needed to overcome were based on the technology limitations in visual feedback (monitor sizes and screen resolution) and user input (mice, keyboards, voice recognition and gesture-based user interfaces). Oh, and then there were the battery/power limitations of running those screens and devices. In those early days of 20-pound transportable computing, that meant a command-line interface, a 10-inch green-screen CRT attached to a full-sized IBM-style keyboard, and a large sewing-machine-sized metal case with a really long extension cord plugged into the wall. The metaphor was that the computer was an electronic typewriter: we typed our work into it with a keyboard, then sent our work to a printer to create a “real world” copy. Period.

Then Apple’s Macintosh, with its bit-mapped CRT screen, mouse and keyboard, stole the desktop metaphor from Xerox PARC. So the metaphor expanded: we took our electronic paper, stuck it in our electronic typewriter, wrote the paper and then saved it in an electronic filing cabinet. Interestingly, Microsoft successfully stole the interface/metaphor from Apple and dominated what was then called micro-computing, but then stumbled when it tried to shoehorn the desktop metaphor into its tablet-class devices in the late 1990s. Microsoft’s error wasn’t just that the devices were underpowered and overly expensive, but that the metaphor didn’t work.

Apple’s brilliance with the iPhone and then the iPad was to not port its desktop metaphor to the tablet but to remove all of the unnecessary layers between the user and the device. So instead of manipulating a pointing device on a literal table to move the cursor on the screen, why not point directly at the screen with your finger to do the same thing? And instead of always having a menu bar at the top of the screen and a keyboard at the bottom, only show me those items I need to accomplish the task I’m performing. It might not seem like much, but composing on an iPad, without a keyboard stuck between oneself and one’s work, is a different experience than using even a MacBook Air. When I first started using computing devices to write in the 1980s, I could only dream of the day when I could take a small 10-inch device to the coffee shop and compose to my heart’s content for hours and hours.

So the limitations remain: how big does the screen need to be to be useful to me, and, if I’m doing input, am I using a virtual keyboard and touch or some other method such as a mid-air swipe or voice? When I first saw the Google Glass announcement last year, I wondered if Google had suddenly made Apple’s pursuit of retina displays and Samsung’s “bigger is better” approach moot. Why compress more pixels onto a screen if you can do the same thing with a virtual heads-up device? This would certainly kill the phablet devices, because who wants to carry around a bigger device when you can do the same thing virtually? Truth be told, Google Glass isn’t trying to package a high-resolution desktop screen via its heads-up display, but it certainly points to the potential, if one were able to project such a full interface into the user’s eyes instead of using a slab of plastic and circuitry. As for the size limitation based on how small is too small for a virtual keyboard, that becomes moot for whoever cracks out-of-the-box voice recognition. The voice dictation built into iOS and Google Voice is gaining in usefulness day by day. Our resistance to wearing Google Glass on our heads and talking to our technology takes us back to the problem of what the next user-interface metaphor is going to be.

PDA – Personal Digital Assistant – When PDAs appeared in the late 1990s they were often little more than glorified calculators with rudimentary contacts and calendars thrown in, while the best of them functioned as digital Franklin Day Planners. But perhaps the next real metaphor will be to have your wearable technology function as a virtual personal assistant that you talk to and interact with. This would be generations beyond simple voice recognition (which isn’t simple at all), approaching the Intelligent Agents hinted at in the Apple Knowledge Navigator video of the late 1980s. While the video stumbles over its desktop heritage with its trash can icon, printing function and saving of files to external storage, what we’re really looking at is technology with personality that can interact with humans and understand inexact human speech patterns. Apple’s Siri and Google Voice are rudimentary first steps in this direction. We’re obviously not there yet, but technology that removes the mechanical layers between us and the tasks we’re trying to accomplish points to the direction of the next technological metaphor: the virtual personal assistant.

Pushing the question even further, I don’t know that I’m looking forward to a future where we’re all talking to our devices. What if we could remove the need to talk to our devices and communicate with them without speaking out loud? The following is an old Geek Brief video exploring neural interfaces from a company called Emotiv.

GBTV #411 | GeekBrief.TV – HID – emotiv.com

It’s difficult to say how many years or decades we may be away from this technology being mass-produced, but the implications of breaking down the layers between ourselves and our technology point to huge changes in how we will interact with it. Inventor, futurist and author of The Singularity Is Near, Ray Kurzweil just posted on his blog, “Brain-computer interfaces inch closer to mainstream, raising questions.” Instead of surrounding myself with four computer screens, quietly typing on a keyboard at my special sit/stand desk, maybe my future self will sit in some comfortable hideaway with a big smile on my face, eyes closed and arms folded, while I silently compose my thoughts onto a device in my pocket, with no obvious devices hanging about my person beyond my unremarkable-looking reading glasses. That would be amazing.

This next video is the one that inspired my thinking about the future. It’s from a TED talk by John Underkoffler, who was early on the scene exploring gesture-based user interfaces. Back in the day this required several high-quality cameras, special gloves and standing in an exact spot within the array of cameras.

John Underkoffler points to the future of UI – TED Video

It may seem impractical to have run all of these experiments. I mean, why fix the keyboard/mouse interface if it’s not broken? I think these explorations into possible future interfaces point to the fact that what we’re trying to accomplish is not “sit at a desk/type at a keyboard” data input. What we’re trying to accomplish is making sense of the data that’s already there and manipulating it in a way that wouldn’t work with keyboard-driven input. Voice control and gesture-based interfaces are intermediary steps. But the point actually isn’t the interface; it’s what we’re trying to accomplish. If it were just raw data input with no higher function, then why move away from the keyboard/mouse/big screen? But because we want to remove the layers and just do the thing, we are heading toward thought-driven interfaces that call upon higher functions beyond calculations and uncurated data.

Happy Holy Friday: What Non-Believers May Be Missing

Crowd on Oktoberfest in Bavaria/Microsoft Clipart/iStockphoto

I’ve had wonderful conversations with my girlfriend, Tricia, over the past few weeks as she’s queried me about my faith status, saying that after a year she still doesn’t really know what I believe in. Also, for me, this week has historically been significant as a time when I’ve reflected on my faith and, more than a few times, found myself on my knees looking for forgiveness or understanding. I’ve always been … religiously sensitive. The church and God were just part of my understanding of the world from my earliest memories. Like the days of the week and Sunday being the beginning of the week, it’s just the way the world was. Was…

I used to look at my life as being divided into three segments of 15 years: 15 years of my youth, 15 years as a believer and 15 years in self-imposed exile from my faith. I guess that leaves the last seven years as an extremely compressed version of the previous three segments, with a real WTF quality to it. In that short period I went from my exile status to diving back into reading my Bible, to looking for a fellowship, to leading worship (both in small gatherings and in larger Sunday services) and then back to exile. I learned a lot, but in the end I felt like I had gotten it wrong, in that I wanted the faith of the second 15 years, but it just didn’t work. So, back into exile I went.

Continue reading

The Story of the OLPC: Kids Are the Mission, Not a Market

At CES 2012 this past January, the One Laptop Per Child foundation unveiled its newest model, the OLPC XO 3.0 tablet. The model shown seemed to have gained some weight and was much boxier than the prototype hyped by OLPC founder Nicholas Negroponte in 2010 (see the videos at the bottom of the page for CES 2012 coverage and the 2010 announcement). The OLPC is near and dear to my heart because I was there at ISTE in 2006 when Negroponte showed off the first OLPC, and I then got my own OLPC as part of a charity buy-one/get-one program in 2008.

The following video, from TED 2007, highlights some very important aspects of the One Laptop Per Child program that tend to get completely missed by competing programs and tech journalists. It used to drive me nuts when John C. Dvorak or Lance Ulanoff (formerly of PC Magazine) would go off about how it’s not a real computer, or ask what the hell third-world kids are going to do with a computer. Even some supporters speculated that it could be used by third-world farmers to better market their crops, or some such foolishness. Argh!

Continue reading

How Important Is Music Education? TEDxBoston – Benjamin Zander and the YOA [video]

I’ve been fortunate enough to have had many music educators as my students, and they tend to respond very strongly to the book we read in my class, The Art of Possibility by conductor Benjamin Zander and psychologist Rosamund Stone Zander. The following video was brought to my attention via one of my students’ blog posts and speaks very specifically to the importance of music as a means to connect across time and across cultures. How important is music education? It cannot be measured, and those who attempt to put a number to it show how little they understand about what music represents to our culture and to being human. Instead of thinking about what we need to cut, we need to think about ways to support those who will write and perform the songs of the future.

Continue reading

Eric Whitacre: A virtual choir 2,000 voices strong [TED talk]

At TEDx Orlando 2011 we were shown the following video/TED talk by Eric Whitacre. I’ve been working in online education for over three years, and having earned my master’s degree and worked on a doctorate online, I know how powerful the connections can be. Far from being a weak substitute for “being there,” there is a powerful “being there” that we apparently take for granted when we’re together, and that is all the more precious when our only connection is via YouTube video and scrolling text. As Whitacre hints at in his TED talk, we make it work. The beauty of these thousands of voices, joined in spirit though spread across the world, speaks to the power we have to connect and sing with thunderous passion and careful dignity. Enjoy.

Continue reading