Ten years ago, Steven Spielberg released the science fiction blockbuster Minority Report. The film featured a futuristic computer system with a user interface controlled by gestures. Tom Cruise swept aside data, searching through video and zooming into photos, simply by performing a series of hand motions.
Today, this technology is a reality. Oblong Industries, a tech startup based in Los Angeles, recently released its g-speak platform, a user interface that closely resembles the futuristic system demonstrated in Minority Report. The similarities between the Minority Report system and the g-speak platform are no coincidence: Oblong's co-founder, John Underkoffler, was the scientific consultant and developer who created the user interface for the film.

Photo by Steve Jurvetson: John Underkoffler explains the human-computer interface he first designed as part of the advisory work for the film Minority Report.
Gesture-controlled user interfaces aren't exactly brand new. In fact, the most popular gesture-controlled UI currently resides in 18 million living rooms across the world. Microsoft's Kinect sensor for its Xbox gaming system provides full-body 3-D motion capture, facial recognition and voice recognition capabilities. Its applications, however, are limited to gaming and entertainment.
Recent advances in technology and mathematics have made it possible to bring touchless UIs to every computer and device. Many believe this new wave of human interfaces will be the next big transformation in modern technology.
Kwindla Kramer, CEO and co-founder of Oblong Industries, is one such believer. In fact, leading the evolution of the human-machine interface is the backbone of his company's mission. The idea is to bend technology to suit our natural needs, instead of bending our needs to suit technology.
"These new UIs give us more natural ways to interact with the computers in our lives. Pointing at screens to move content around, flying through 3-D spaces with just hand gestures, and using a television without ever picking up a remote control are all new ways of using, and thinking about, digital experiences," explained Kramer.

Courtesy Photo: Kwindla Kramer, CEO of Oblong Industries, believes that turning computing from a single-screen, single-device activity into a multi-screen, multi-user, multi-device way of creating, communicating and collaborating is the next big technology frontier.
The examples Kramer provides aren't hypothetical. Oblong's g-speak technology allows users to navigate and control data by pointing and gesturing at a screen. Users can also "fly" through 3-D spaces by manipulating angles of perspective. PrimeSense, the Israeli company that supplies the sensors for Microsoft's Kinect, released a new product for Smart TVs earlier this year. The application, called reach UX, features a gesture-controlled interface that allows users to select, browse and control movies and entertainment programs through a series of simple gestures such as sweeping, pulling and releasing.
Leap Motion is another company emerging in the field of gesture-controlled interfaces. Its product, the Leap, is an iPod-sized USB peripheral that creates a 3-D interaction space of eight cubic feet around a computer. The sensor is scheduled for release in early 2013 and can be used to control any PC or Mac through hand motions. It is designed to be compatible with a variety of applications and can be used in place of input devices such as the keyboard and mouse.
Like g-speak, the Leap sensor is designed to control computers with natural hand and finger movements. It can differentiate between individual fingers and can even identify a stylus or pencil as an object separate from the user's body. To track these nuanced movements, the Leap team used a patented mathematical approach that allows the Leap to sense movements to within 0.001 millimeter.
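To make the idea concrete, the short Python sketch below shows roughly how an application might consume per-finger position data from a desktop 3-D sensor of this kind and turn it into a simple "swipe" gesture. The names used here (SimulatedFrameSource, Frame, Finger, detect_swipe) are hypothetical stand-ins, not the actual Leap Motion SDK, and the frames are simulated so the snippet runs on its own.

# Hypothetical sketch: turning streamed finger positions into a swipe gesture.
# This is NOT the Leap Motion API; all names and the frame data are made up.
import random
from dataclasses import dataclass

@dataclass
class Finger:
    x: float  # millimeters left/right of the sensor
    y: float  # millimeters above the sensor
    z: float  # millimeters toward/away from the screen

@dataclass
class Frame:
    fingers: list

class SimulatedFrameSource:
    """Stands in for a real sensor: yields frames of finger positions."""
    def frames(self, count=30):
        x = -100.0
        for _ in range(count):
            x += 8.0  # a hand drifting steadily to the right
            yield Frame(fingers=[Finger(x + random.uniform(-2, 2), 200.0, 0.0)
                                 for _ in range(5)])

def detect_swipe(frames, threshold_mm=150.0):
    """Report a left-to-right swipe once the hand's average x position
    has traveled more than threshold_mm across the frame stream."""
    start_x = None
    for frame in frames:
        if not frame.fingers:
            continue
        avg_x = sum(f.x for f in frame.fingers) / len(frame.fingers)
        if start_x is None:
            start_x = avg_x
        elif avg_x - start_x > threshold_mm:
            return True
    return False

if __name__ == "__main__":
    if detect_swipe(SimulatedFrameSource().frames()):
        print("swipe right detected: e.g. advance to the next photo")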
"We believe everyone will benefit from the Leap," said Leap Motion CEO Michael Buckwald. "The intuitive nature of Leap's motion control will enhance all computing activities across gaming, productivity, education, and much more. At the same time, Leap will allow users to perform more traditionally complex tasks, such as photo manipulation or 3-D modeling, with greater control and ease."

Courtesy Photo: Michael Buckwald, CEO of Leap Motion, claims that the ability to control any computer with nuanced hand and finger movements will fundamentally transform the way people interact with computers.
According to Buckwald, the Leap sensor is 200 times more accurate than any other technology on the market, and it works for tasks that make up the vast majority of our interactions with a computer. Before its release to the public, thousands of developers will have the chance to interact with this new technology for free. For other consumers, the price to take home the Leap is $70.
Like Oblong Industries, the Leap Motion team has identified the evolution of touchless human-machine UI as the next great disruptive trend in technology. Buckwald sees practically limitless capabilities for the new technology.
"We envision a day in the near future when our motion control technology will be used in most consumer products: not just computers, but cars, appliances, medical devices, smartphones, tablets and more. We're also excited by the ideas we've seen from developers applying for our developer units," Buckwald said.
Kramer offers a more pragmatic view of these early, revolutionary technologies.
"Both of these sensors are still relatively low-resolution and have fairly small sensing volumes," Kramer said of Leap Motion and PrimeSense. "They're great, and they provide lots of new opportunities for product development and for research. But in terms of the full trajectory of these technologies, we're in the early days.
"By 2015 we'll see spatial and gestural interfaces as a standard feature on computers (including tablets and phones)," predicts Kramer. "The mouse won't go away. There are too many programs in use that were designed around the mouse. And we'll still use touch screens as well. But more and more of our computing will be spatial and gestural."
While g-speak, Leap Motion, and PrimeSense represent steps on the path to a fully interactive digital world, the technology still has a long way to go before spatial and gestural interaction reaches its potential.
"Big technology transitions happen when cost, capabilities, and good software and product design all line up," explained Kramer. "Graphical computing was incubated in research labs for 15 years before Apple kick-started the shift from command-line computing to graphical UIs in 1984 with the introduction of the Macintosh."
Source: http://www.cmn.com/2012/12/could-the-mouse-be-obsolete-by-2015/