I’d previously worked with awe.js in an AR- and IoT-based Google Cardboard demo over at SitePoint (links to that can be found in the article) and found the whole platform really nice to develop for, even in its early stages. Rob was one of the first people I thought of for our first AR-related interview at Dev Diner: a talented guy with plenty of insight into this space. He was kind enough to let me ask him some questions on the AR ecosystem and how developers can get involved.
We’ve been working with AR since about 2007. And since about 2009 we’ve had a vision that all of this AR awesomeness should be able to work on the web platform. Here’s an early diagram we proposed back in 2010.
It’s pretty primitive looking back – but it’s also interesting how close we were.
In fact we were able to write a capability-detection-based test harness which used to be available at isweb3here.com (we’ve since let that domain lapse and someone else bought it). The fascinating thing here was that we could write this visual test harness about 10–12 months before any browser in the wild was able to pass it. Opera on Android was the first browser we saw pass it, and Chrome and Firefox on both Android and desktop OSes soon followed.
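To give a feel for what such a harness checks, here’s a minimal sketch of browser capability detection for Augmented Web features. The feature list and function name are illustrative assumptions, not awe.js’s actual implementation; the checks are written defensively so they degrade gracefully anywhere.

```javascript
// Illustrative capability detection sketch (not awe.js's actual harness).
// Each check probes for the presence of a web platform API that
// Augmented Web experiences typically depend on.
function detectAugmentedWebSupport(scope) {
  // Default to the global window in a browser; accept any scope for testing.
  scope = scope || (typeof window !== 'undefined' ? window : {});
  var nav = scope.navigator || {};
  return {
    webgl: typeof scope.WebGLRenderingContext !== 'undefined',
    getUserMedia: !!(nav.mediaDevices && nav.mediaDevices.getUserMedia) ||
                  typeof nav.getUserMedia === 'function',
    webAudio: typeof scope.AudioContext !== 'undefined' ||
              typeof scope.webkitAudioContext !== 'undefined',
    deviceOrientation: typeof scope.DeviceOrientationEvent !== 'undefined',
    webSockets: typeof scope.WebSocket !== 'undefined'
  };
}
```

A real harness would go further and exercise each API (e.g. actually rendering a frame), since presence of a constructor doesn’t guarantee a working implementation — which is exactly the kind of subtlety the video issues below illustrate.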
Depends what you want to focus on. I can highly recommend learning projective geometry and linear algebra. In fact the courses on linear algebra that are available on Khan Academy are awesome and Sal is an amazing educator.
This then opens the door to learning more about 3D formats, computer vision, signal processing and even into the world of Deep Neural Networks.
Things that are useful. Where the “value the user perceives” exceeds the “effort they put in”. Unfortunately there are still very few real examples of this. I also think that the Augmented Web is much broader than just AR as it covers VR, 3D scenes and a range of other sensor based interactions. Most users just want “digital magic” and don’t really care what it’s called.
Personally I also think the cognitive research side is fascinating. Here’s an old research project summary I published that gives an overview of a range of this material.
We’ve had people using it to make educational content and musical toys. The IoT visualisation you did and a range of cultural content too. The thing we’re really waiting for is for the standards implementation to stabilise (phase 1) and then Apple to adopt getUserMedia on iOS (phase 2).
Recently there have been a few bumpy issues with video processing – for instance, Firefox on Android currently renders video onto the 2D canvas upside down (this will be fixed as of Firefox 40), and Chrome on Android currently doesn’t render video as textures at all.
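One way to work around upside-down video like this is to flip the canvas’s y-axis before drawing each frame. The function below is an illustrative sketch of that technique, not awe.js’s actual code; the `flipY` flag would be set based on the kind of browser detection described earlier.

```javascript
// Illustrative workaround: flip the 2D canvas vertically before drawing
// the camera frame, so browsers that deliver video upside down still
// display it the right way up.
function drawVideoFrame(ctx, video, flipY) {
  var w = video.videoWidth;
  var h = video.videoHeight;
  ctx.save();
  if (flipY) {
    ctx.scale(1, -1);     // mirror the y-axis
    ctx.translate(0, -h); // shift the flipped frame back into view
  }
  ctx.drawImage(video, 0, 0, w, h);
  ctx.restore();
}
```

Because the transform is wrapped in `save()`/`restore()`, any other drawing on the same canvas (e.g. AR overlays) is unaffected by the flip.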
We should be able to work relatively seamlessly with any open headset. For instance we already work well with any of the Cardboard format devices and things like the Zeiss VR One. You can also use the GearVR if you don’t fully seat the micro-USB connector. Any developer interested in this space should also follow all the awesome work being done under the WebVR list too.
I’m also pretty horrified by the way Oculus is trying to build an iOS-like locked down environment. They probably have the dollars and market might now (with Facebook’s backing) to try this sort of thing. But it’s anti-open and just plain wrong-headed.
I think Hololens is fascinating too and the ability to anchor AR content to the space around you is critical. But for Microsoft to aim to ship 1 Billion of these is a VERY big ask. By contrast there are already over 600 Million Android devices that support awe.js. If you add in fixed display devices like laptops and desktops this takes us well over 1 Billion already. Obviously we’re no Microsoft or Facebook – but we are delivering a solution that lets you deliver AR to a massive audience right now. And of course that doesn’t require any downloads or new devices.
OMG I have a list!
First, this is a broad domain that crosses so many disciplines it often makes my head hurt.
Second, it’s almost impossible to plan around when web standards will be stable and adopted.
Third, the stabilisation process is very time consuming and even the tests that the browser vendors are using don’t capture all the subtle interactions between standards that we rely on (e.g. the video issues described above).
And of course – life gets a little lumpy sometimes so keeping the team trucking along on our development roadmap is always challenging.
We focus on prototyping and creating the experiences first before we build them out.
We also focus on how “close” a user feels to an experience. You can measure this distance in a number of ways – number of steps involved to access it, perceived network speed and so on. This is based on an old strategy we’ve been working with since around 2007.
This is our weakest spot at the moment. awe.js has really been focused on making our internal dev jobs easier – sharing it with the broader world has followed that – and publishing some nice documentation is still on our team task list. But we are working on this.
There’s also nice people like you publishing interesting examples. (Friendly link once again from Dev Diner: here’s that example over on SitePoint!)
We are also in the process of preparing a release that will add support for a whole range of other 3D formats plus some really useful API updates and bug fixes.
Get your hands dirty and start creating prototypes. Try to use open standards if you can (e.g. the Augmented Web). And audio is a really under-utilised modality in AR (NOTE: awe.js already supports 3D soundscapes using Web Audio).
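For readers curious about 3D audio, here’s a generic Web Audio sketch of positioning a sound source in 3D space – this is not awe.js’s API, and the helper name is illustrative. Newer browsers expose `positionX`/`Y`/`Z` AudioParams on `PannerNode`, while older ones only have the `setPosition()` method, so the helper supports both.

```javascript
// Generic Web Audio sketch (not awe.js's API): place a sound source at a
// 3D position so it pans and attenuates relative to the listener.
function positionSource(panner, x, y, z) {
  if (panner.positionX) {
    // Modern AudioParam-based API
    panner.positionX.value = x;
    panner.positionY.value = y;
    panner.positionZ.value = z;
  } else if (typeof panner.setPosition === 'function') {
    // Legacy method-based API
    panner.setPosition(x, y, z);
  }
}

// In a browser you would wire it up roughly like this:
//   var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
//   var panner = audioCtx.createPanner();
//   source.connect(panner).connect(audioCtx.destination);
//   positionSource(panner, 2, 0, -5); // 2m to the right, 5m ahead
```

Updating the panner’s position each frame from your AR pose tracking is what turns a flat audio track into a soundscape anchored in the scene.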