Info

Vision Thinking

skate to where the puck is going to be

In the mid-90s only a handful of people had an opinion on web services, cloud computing or responsive realtime content. Making the mental leap from Tim Berners-Lee's early tool, designed to help physicists answer tough questions about the Universe, to an increasingly dominant way we communicate, shop, play, consume entertainment and learn required an uncommon dose of insight and instinct.

Anyone working in the field of computer vision right now will recognise that we too are at a very similar stage to where the web was some 20 or so years ago. Being asked how big the market is feels like being asked, two decades ago, how much the web would change the way we experience the world.

Gigantic technology companies and smart young startups are tackling hard problems, industry standards are emerging, the technology has moved from the preserve of scientific conferences into the public domain, and experiences that once required very specialised technical resources are becoming easier to create and deploy.

This is what makes working in the computer vision field so exciting. It is rare to be working in a new medium at its mainstream inflection point. We are beginning to see the impact it is having on mobile marketing and gaming, and the impact it will soon have on education, entertainment, retail and the realtime mobile web.

Content is transitioning from being newly responsive to screen size into being responsive to the physical environment it inhabits, and we are the ones building the backbone services and platforms, pushing forward the standards and pioneering delightful and useful user experiences.

Just as people like Jeff Bezos were the first to skate to where the puck was going in the early days of the web (and continue to do so), we think we are doing the same where mobile, web and 3D technology are converging to change the way we are entertained and educated, and the way we buy, sell and explore.

Our own team comprises computer vision, 3D imaging, web application, gaming, user interface / experience design, and mobile product expertise. Our focus has been on powering incredibly rich and robust end-user experiences. We think a native device approach is essential for integration with new products, but we also see incredible potential in nascent frameworks like http://threejs.org/ and emerging CSS 3D rendering, which are beginning to allow dynamic, computer vision-linked 3D content to be provisioned on the fly via the web to a handset, without plugins (with computer vision bundled into the mobile OS). This will allow the widest range of developers to carry their existing experience into a new medium.
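
As a rough sketch of what that plugin-free delivery can look like, the snippet below uses three.js to render a simple piece of 3D content in an ordinary browser. The scene, geometry and animation are placeholders of our own rather than any particular product; in a real vision-linked experience the object's pose would be driven by tracking data each frame.

    // A minimal three.js scene: the kind of dynamic 3D content that can be
    // served to a handset via the web with no plugins. Assumes a browser
    // environment with WebGL; the cube and its animation are placeholders.
    import * as THREE from 'three';

    const scene = new THREE.Scene();
    const camera = new THREE.PerspectiveCamera(
      60, window.innerWidth / window.innerHeight, 0.1, 100);
    camera.position.z = 3;

    const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);

    // Content that could be provisioned on the fly from a server.
    const marker = new THREE.Mesh(
      new THREE.BoxGeometry(1, 1, 1),
      new THREE.MeshNormalMaterial());
    scene.add(marker);

    // In a vision-linked experience the pose would come from the tracking
    // layer each frame; here we simply spin the object so the sketch runs
    // on its own.
    function animate() {
      requestAnimationFrame(animate);
      marker.rotation.y += 0.01;
      renderer.render(scene, camera);
    }
    animate();

With computer vision bundled into the mobile OS, the same render loop could instead set the camera or marker transform from the pose the vision layer reports, so the content appears locked to the physical scene.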

In a recent interview we gave a prediction of a five-year countdown to computer vision-powered experiences becoming commonplace on everyday mobile devices. It still feels bold, but if there is one thing we learnt from the adoption of the web, it is that waiting for an inflection point can be like waiting for Godot; once it arrives, the rate of change astounds even the most bullish optimists. We are already beginning to see new games and services coming to market that merge the digital and physical worlds in ways that were once the preserve of sci-fi or complex special effects, and if our own experience is anything to go by, this is not about to slow down any time soon.

This does not, of course, make for an easy journey, but it certainly makes for an exciting and rewarding one.

Thanks for reading, and happy skating.

* [UPDATE] I have just seen that Mark Suster also has a great post giving an investor's perspective on the same topic. It is well worth reading.

AR gaming: connecting online and offline worlds

The fourth wall is an imaginary boundary that separates an audience from the content they are observing, and vice versa. The same concept holds for our interaction with the digital world. The content lives within a bubble that has kept the online world divorced from a user's own physical, offline context.

We have now reached an inflection point where this is changing forever. The merging of our experiences in the online world with our presence in the offline world is happening on many levels, and at an unprecedented velocity.

The offline world is getting smarter. Everything from drinks machines to bus stops is sensor-powered and connected into a larger, intelligent online system. This increase in online and offline interconnection has only just begun.

At Obvious Engineering we’ve been thinking about how computer vision will play a pivotal role in this process of linking together the two worlds we currently inhabit. We think the real game-changer is that we all carry around ‘magical glass slabs’ that offer a window into the space where both of these worlds are merged.

Through better spatial recognition and understanding we can begin to integrate new levels of intelligence into what were once considered inanimate objects. For the end user, the entire world becomes interactive in both a digital and a physical manner.

For product makers there is a new opportunity to digitally link stories, utility and experiences directly with their objects in the real world. For content makers such as gaming studios, this means the world will become interactive in a way that the Wii and Kinect have only begun to scratch the surface of. Content becomes responsive to space, and with the advent of 3D printing we can see a future where space becomes responsive to digital content too.

As form factors change over time, augmented reality will grow further in tune with the tactile way we experience the physical world.

You only have to look at how a young child expects the world to be interactive in a way their parents never did, to understand that this merging of physical and digital, online and offline, is already in full swing.

In very simple terms, a web presence was once a novelty, something you did not need for either personal or business reasons. It was something for people in lab coats. In a relatively short period of time, the world as we know it has been both disrupted and reshaped by the mainstreaming of this medium and the technology that powers it.

The same rules will apply to objects and products that gain digital augmentation.