Augmented reality is a big emerging area in tech right now, yet relatively few developers are chasing it so far. Rob Manson is CEO and co-founder of BuildAR and has been working with his team on developing awe.js – an entirely browser-based JavaScript library to help developers build for the Augmented Web.

I’d previously worked with awe.js in an AR- and IoT-based Google Cardboard demo over at SitePoint (links to that can be found in the article) and found the whole platform really nice to develop for, even in its early stages. Rob was one of the first people I thought of as a talented guy with plenty of insights into this space for our first AR-related interview at Dev Diner. He was kind enough to let me ask him some questions on the AR ecosystem and how developers can get involved.

How did awe.js get started? Where did it all come about?

We’ve been working with AR since about 2007. And since about 2009 we’ve had a vision that all of this AR awesomeness should be able to work on the web platform. Here’s an early diagram we proposed back in 2010.

It’s pretty primitive looking back – but it’s also interesting how close we were.

In fact we were able to write a capability-detection-based test harness which used to be available at isweb3here.com (we’ve since let that domain lapse and someone else bought it). The fascinating thing here was that we could write this visual test harness about 10–12 months before any browser in the wild was able to pass it. We saw our first version of Opera on Android pass it, and Chrome and Firefox on both Android and desktop OSes soon followed.
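To make that idea concrete, a capability-detection harness like the one described might boil down to a handful of feature checks. This is a hypothetical sketch – the function name and the specific checks are illustrative, not the actual isweb3here.com tests:

```javascript
// Hypothetical sketch of a capability-detection harness along the lines
// described above. Each check probes for a web standard the Augmented Web
// relies on; these are illustrative examples, not the original test suite.
function detectAugmentedWebSupport(env) {
  // `env` is the browser's global scope (i.e. window)
  var nav = env.navigator || {};
  return {
    // WebGL for rendering 3D scenes
    webgl: typeof env.WebGLRenderingContext !== 'undefined',
    // Camera access for AR video streams
    getUserMedia: !!(nav.mediaDevices && nav.mediaDevices.getUserMedia),
    // Device orientation for sensor-based interaction
    deviceOrientation: typeof env.DeviceOrientationEvent !== 'undefined',
    // Web Audio for 3D soundscapes
    webAudio: typeof env.AudioContext !== 'undefined'
  };
}
```

In a page you would call `detectAugmentedWebSupport(window)` and only boot the AR experience if every check passes – the same "does this browser pass yet?" question the original harness answered visually.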

What background knowledge would you recommend for developers looking to get into AR development and do well in it? Are there certain types of mathematics, physics, programming concepts or other areas of knowledge that you’d recommend?

Depends what you want to focus on. I can highly recommend learning projective geometry and linear algebra. In fact the courses on linear algebra that are available on Khan Academy are awesome and Sal is an amazing educator.

This then opens the door to learning more about 3D formats, computer vision, signal processing and even on into the world of Deep Neural Networks.

Of course, there’s plenty of room for people that just want to focus on interaction design too and that’s a great area for JavaScript developers to explore. There’s also a fascinating range of cognitive research that’s booming in this area.

Our view is that JavaScript and web standards are the place where it makes the most sense to focus – but of course we’re biased!

What use cases are most exciting to you in the world of AR? Where is its biggest potential?

Things that are useful. Where, in the value exchange, the “value the user perceives” exceeds the “effort they put in”. Unfortunately there are still very few real examples of this. I also think that the Augmented Web is much broader than just AR as it covers VR, 3D scenes and a range of other sensor-based interactions. Most users just want “digital magic” and don’t really care what it’s called.

Personally I also think the cognitive research side is fascinating. Here’s an old research project summary I published that gives an overview of a range of this material.

What is your favourite use of awe.js so far? What made it so effective or memorable?

We’ve had people using it to make educational content and musical toys, the IoT visualisation you did, and a range of cultural content too. The thing we’re really waiting for is for the standards implementation to stabilise (phase 1) and then for Apple to adopt getUserMedia on iOS (phase 2).

Recently there have been a few bumpy issues with video processing – for instance on Android, Firefox currently renders video onto the 2D canvas upside down – this will be fixed as of Firefox 40. And Chrome on Android currently doesn’t render video as textures at all.
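The generic way to work around that kind of upside-down rendering is to flip the canvas's Y axis before drawing each frame. This is a hedged sketch of the technique, not awe.js's actual fix, and `needsFlip` would ideally come from a one-off feature test (draw a frame and sample a known pixel) rather than user-agent sniffing:

```javascript
// Sketch of a workaround for upside-down video on a 2D canvas: flip the
// Y axis before drawing the frame. `needsFlip` should be determined once
// by a feature test, not hard-coded. (Illustrative only.)
function drawVideoFrame(ctx, video, width, height, needsFlip) {
  ctx.save();
  if (needsFlip) {
    ctx.translate(0, height); // move the origin to the bottom edge
    ctx.scale(1, -1);         // invert the Y axis
  }
  ctx.drawImage(video, 0, 0, width, height);
  ctx.restore();
}
```

The `save()`/`restore()` pair keeps the flip from leaking into any other drawing the page does on the same context.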

How does awe.js compare to AR headset systems like Meta, Microsoft Hololens and Magic Leap? Are there situations in which awe.js is more effective and vice versa?

We should be able to work relatively seamlessly with any open headset. For instance we already work well with any of the Cardboard-format devices and things like the Zeiss VR One. You can also use the GearVR if you don’t fully seat the micro-USB connector. Any developer interested in this space should also follow all the awesome work being done under the WebVR list too.

I’m also pretty horrified by the way Oculus is trying to build an iOS-like locked down environment. They probably have the dollars and market might now (with Facebook’s backing) to try this sort of thing. But it’s anti-open and just plain wrong-headed.

I think Hololens is fascinating too and the ability to anchor AR content to the space around you is critical. But for Microsoft to aim to ship 1 Billion of these is a VERY big ask. By contrast there are already over 600 Million Android devices that support awe.js. If you add in fixed display devices like laptops and desktops this takes us well over 1 Billion already. Obviously we’re no Microsoft or Facebook – but we are delivering a solution that lets you deliver AR to a massive audience right now. And of course that doesn’t require any downloads or new devices.

What were the biggest challenges in developing awe.js?

OMG I have a list!

First, this is a broad domain that crosses so many disciplines it often makes my head hurt.

Second, it’s almost impossible to plan around when web standards will be stable and adopted.

Third, the stabilisation process is very time consuming and even the tests that the browser vendors are using don’t capture all the subtle interactions between standards that we rely on (e.g. the video issues described above).

And of course – life gets a little lumpy sometimes, so keeping the team trucking along on our development roadmap is always challenging.

How does your team make UX decisions in relation to AR interfaces? Do you have any tips in this area?

We focus on prototyping and creating the experiences before we build them out.

We also focus on how “close” a user feels to an experience. You can measure this distance in a number of ways – number of steps involved to access it, perceived network speed and so on. This is based on an old strategy we’ve been working with since around 2007.

If someone wants to get started developing with awe.js, where is the best place for them to go online to learn?

This is our weakest spot at the moment. awe.js has really been focused on making our internal dev jobs easier – sharing it with the broader world has followed that – and publishing some nice documentation is still on our team task list. But we are working on this.

There’s also nice people like you publishing interesting examples. (Friendly link once again from Dev Diner: here’s that example over on SitePoint!)

We are also in the process of preparing a release that will add support for a whole range of other 3D formats plus some really useful API updates and bug fixes.

Any final words for developers considering developing AR applications?

Get your hands dirty and start creating prototypes. Try to use open standards if you can (e.g. the Augmented Web). And audio is a really under-utilised modality in AR (NOTE: awe.js already supports 3D soundscapes using Web Audio).
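For a taste of that audio modality, here's what a minimal spatialised sound source looks like with the Web Audio API. This is generic Web Audio usage with an illustrative function name, not the awe.js soundscape API:

```javascript
// Minimal sketch of a spatialised sound source using the Web Audio API.
// A PannerNode places the sound at a position in 3D space relative to
// the listener. (Generic Web Audio usage, not the awe.js soundscape API.)
function createSpatialSource(audioCtx, buffer, x, y, z) {
  var source = audioCtx.createBufferSource();
  source.buffer = buffer;
  var panner = audioCtx.createPanner();
  panner.panningModel = 'HRTF';  // head-related transfer function panning
  panner.setPosition(x, y, z);   // place the sound in 3D space
  source.connect(panner);
  panner.connect(audioCtx.destination);
  return source;                 // call source.start() to begin playback
}
```

Move the panner's position as the user's head or device orientation changes and you get a soundscape that stays anchored to the scene – the audio equivalent of anchored AR visuals.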

A huge thank you to Rob for taking the time to answer these questions in our first ever Dev Diner interview! You can find him on Twitter at @nambor and find awe.js at the awe.js GitHub page.

Know other emerging tech enthusiasts who might want to read this too? Please like and share this post with them!

Would you like to republish this article in your own publication?
Contact Dev Diner to request official republication of this article.

