Augmented Vision will be available in 2015.
Am I prophetic? Delusional? Or merely guessing? How does a deluded, prophetic, hand-waving guess sound?
The reality is that the development of complex systems like Augmented Vision will take more than just putting the various technologies together. AV will require a change in the zeitgeist similar to the one the iPhone brought about. But that’s not what I’m here to talk about.
While the magic moment, the tipping point, will take some unknown trigger, the technologies will have to be available to support it. The iPhone couldn’t have existed five years ago, just as AV can’t exist right now.
The first thing to ask is: what is Augmented Vision? I will attempt to define the term, though others may define it differently. That is okay, as I am only trying to place a target in space to draw an arrow toward (or in this case, many arrows).
Definition of Augmented Vision: an unobtrusive, self-contained, human-oriented system that creates an augmented reality experience allowing the user to interact with any object in the populated world. Let’s break that down into its pieces.
1 – Unobtrusive and self-contained: the devices are fashionable, easy to wear and comfortable.
2 – Human-oriented: centered around the everyday human experience.
3 – Creates an AR experience: the cloud is a mature system overflowing with content.
4 – Interact with any object in the populated world: in our modern surroundings, anything can be identified, located and learned from.
I’m not speaking of an AV experience that makes reality and the virtual difficult to distinguish, like Denno Coil. I’m thinking of AV as a tool to enhance the everyday living experience, just as other technologies like the iPhone have.
Computational Power – I’m using the specification difference from the iPhone 1.0 to 3.0, together with Moore’s Law, to project processing power forward. I think 4.8GHz would be plenty of processing power to perform most operations, and Rouli pointed out to me that hard-core algorithms will be computed in the cloud, as with SREngine and Alcatel-Lucent’s initiative. So computational power won’t be the limiting factor.
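The arithmetic behind that projection is worth making explicit. The sketch below is my own illustration, not a figure from any vendor: it assumes a roughly 600MHz iPhone-3GS-class baseline in 2009 and a doubling of clock speed every two years, which lands almost exactly on the 4.8GHz figure.

```python
def moores_law_projection(base_mhz, base_year, target_year, doubling_period=2.0):
    """Project a clock speed forward, assuming it doubles every
    `doubling_period` years (a rough Moore's Law reading)."""
    doublings = (target_year - base_year) / doubling_period
    return base_mhz * 2 ** doublings

# Assumed baseline: a ~600 MHz iPhone-3GS-class CPU in 2009.
projected = moores_law_projection(600, 2009, 2015)
print(f"{projected / 1000:.1f} GHz")  # prints 4.8 GHz
```

Six years at one doubling per two years is three doublings, so 600MHz × 8 = 4800MHz. The exact baseline and doubling period are assumptions; the point is that a mid-decade handset plausibly reaches that ballpark.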
Vision Systems – Not much to go on here except the release of the AV920 from Vuzix this fall. Judging by the cellphone development cycle of the last ten years, I’m thinking lightweight, fashionable and comfortable AV glasses will be available to the masses in five years.
Control Systems – Talk, type, touch and think. Typing and touch are the current systems. Think is too far off to be realistic for a human-oriented system. This leaves talking and “air-touch” as the probable control systems. Much of the technology is already known, so control systems won’t limit AV.
Software – This is the biggest unknown. What makes up the bag of tricks required to make an AV platform similar to the iPhone’s? Object recognition, outdoor markerless tracking, perfect occlusion, non-rigid surfaces, optimizing frames-per-second, geolayers, etc. I placed some items on the progression, but it’s hard to say which ones will be needed and how they fit together. The tools eventually needed will depend on the creativity of the manufacturer. I could speculate further, but this is a question better answered by someone else.
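To make that bag of tricks a little more concrete, here is one hypothetical way the pieces could fit together in a single AR frame loop. Every function name here is an illustrative stub I made up, not a real API; a real system would replace each stub with the corresponding component from the list above (recognition possibly offloaded to a cloud service, pose estimation from markerless tracking, and so on).

```python
def recognize_objects(frame):
    # Stub: object recognition, run on-device or offloaded to the cloud.
    return [{"label": "pub", "position": (120, 80)}]

def track_pose(frame):
    # Stub: outdoor markerless tracking estimating the camera pose.
    return {"x": 0.0, "y": 0.0, "heading": 0.0}

def fetch_geolayers(pose):
    # Stub: pull nearby geolayer content from the cloud for this location.
    return [{"layer": "directions", "target": "nearest pub"}]

def render(frame, objects, layers, pose):
    # Stub: composite annotations over the frame, handling occlusion,
    # fast enough to hold an acceptable frames-per-second rate.
    return {"annotations": len(objects) + len(layers)}

def process_frame(frame):
    """One pass of a hypothetical AV pipeline: recognize, track, fetch, render."""
    objects = recognize_objects(frame)
    pose = track_pose(frame)
    layers = fetch_geolayers(pose)
    return render(frame, objects, layers, pose)
```

This is only a sketch of the data flow, not a claim about any shipping architecture; the hard part is exactly what the paragraph above says: which of these boxes matter, and how well each must work.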
While this post is mostly speculation based on the available research and the limited commercial products on the market, I think it is a useful exercise to see where the technology is headed. Looking at the components required and seeing the gaps in development, an entrepreneur might seize the opportunity to fill one of those gaps with the right product.
Once created, basic Augmented Vision will be more like Terminator Vision, but it will create a platform to launch from. Will it grow until it reaches an Augmented Vision that blurs reality and the virtual, like Denno Coil or the Digital Sea? I might not find out in my lifetime, but until then, I’d be happy with an AV that gives me good hands-free directions to the nearest pub for a pint of Guinness.