The world of artificial intelligence can be an intimidating one for many developers, and even understanding where we currently stand can be a tough challenge. I recently spent time working out how to explain AI to a crowd of web developers for Web Directions Code and thought I’d turn my talk into an article series! Here’s part one: a look at where things are right now, where we’re headed and why we urgently need to discuss AI’s future today.

Obama generated by AI and the exponential growth of AI

It’s an interesting time in AI!

AI is learning faster than you might realise

Let’s do a quick test. Which of the following videos is NOT generated by AI?

Difficult test?

It was a trick question — they all are. If you keep watching that video (and go back to the very start) you’ll see they are all versions generated by machine learning!

However, there are still some limitations, at least within publicly shown AI capabilities (chances are that government, military and research groups have unreleased capabilities too!). Generating audio that mimics people isn’t quite as realistic just yet!

That is generated speech from a startup called Lyrebird, which is looking to provide this as a service for developers to use. You’ll be able to record one minute of someone’s voice and Lyrebird will compress that voice’s DNA into a “unique key” that you can use to make them say anything. At the moment it does still sound a little robotic, but it’s surely only a matter of time and more data before the service becomes really, really accurate. It even lets you control the emotion of the generated voice.
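To make the developer side of that concrete, here’s a rough sketch of what such a workflow could look like in Python. To be clear, this is not Lyrebird’s actual API: the domain, endpoints and field names below are all made up purely to illustrate the record-a-sample, get-a-key, generate-speech flow described above.

```python
import requests

# Hypothetical voice-cloning service -- none of these endpoints or field names are real.
API = "https://api.example-voice-service.com/v1"

# 1. Upload roughly one minute of recorded speech; the service returns a "voice key".
with open("one_minute_sample.wav", "rb") as sample:
    voice_key = requests.post(f"{API}/voices", files={"sample": sample}).json()["voice_key"]

# 2. Ask the service to say anything in that voice, with an emotion hint.
response = requests.post(
    f"{API}/generate",
    json={"voice_key": voice_key, "text": "Hello from a cloned voice!", "emotion": "excited"},
)

# 3. Save the generated audio.
with open("generated.wav", "wb") as out:
    out.write(response.content)
```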

Why now?

Some out there are pretty sceptical of AI; it’s been a big buzzword in the past that then appeared to fizzle out, and some question whether this is just a repeat of those past fizzles. Why is artificial intelligence having a resurgence now? Is this the era where AI will truly emerge, or another false start? Right now, there are a few big factors bringing artificial intelligence back to the forefront of emerging tech:

1. We have so much data

“Every day, we create 2.5 quintillion bytes of data. To put that into perspective, 90 percent of the data in the world today has been created in the last two years alone.” — IBM Marketing Cloud trends for 2017

That’s a whole lot of data right there. In fact, if 90 percent of the world’s data was created in the last two years, the total is roughly ten times what it was two years earlier, which means we appear to generate ten times as much data every two years. That’s a fantastic thing for artificial intelligence, because there’s one important ingredient in building really effective AI: data. Good AI needs good data.

That data is exactly what Samsung was missing this year when trying to release their Bixby voice assistant on the Samsung Galaxy S8. They had the data to launch it successfully in Korean, but not enough for the English-speaking version to ship on time, so it arrived late and in a beta state. The Korea Herald says that “the problem has come because Samsung started its big data mining long after its rivals”. Amazon is on the lookout for an Aussie linguist right now to join their data team and likely help bring the Amazon Echo to Australia. There’s just no substitute for good quality data.

That’s the very reason why Google can release their software libraries for AI, such as TensorFlow, without worrying too much about competitors using them. The value lies largely in the data the AI can access and use.

I had put together a chatbot demo for Web Directions Code that could answer questions about the event such as “What’s on Day One?” and “Who is (speaker name)?”. However, this demo had its own missing big data: I didn’t have data on what people would ask at a conference before building it. I attempted my own method of collecting data by asking on Twitter, but that didn’t give me a whole lot to work with. Instead, I adjusted the bot to handle common questions in real time as the event went on. We’ll be using a more developed version of the chatbot for the Web Directions AI conference in September, and that bot will be more effective because I’ve now got a bit of data on what people commonly ask!
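If you’re wondering what answering “What’s on Day One?” looks like under the hood, here’s a rough sketch of a fulfilment webhook in the style that Api.ai calls once it has matched an intent and extracted its parameters. The intent name, parameter name and schedule data below are my own illustrative examples, not the real conference bot.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Example schedule data -- the real bot would pull this from the conference programme.
SCHEDULE = {
    "1": "Day One kicks off with the opening keynote, followed by talks on AI, PWAs and performance.",
    "2": "Day Two covers WebVR, security and the closing panel.",
}

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()
    # Api.ai sends the matched intent and its parameters in the request body;
    # "schedule.day" and "day-number" are made-up names for this sketch.
    intent = req["result"]["metadata"]["intentName"]
    params = req["result"]["parameters"]

    if intent == "schedule.day":
        day = params.get("day-number", "1")
        reply = SCHEDULE.get(day, "Hmm, I only know about Day One and Day Two!")
    else:
        reply = "I'm still learning what people ask at conferences!"

    return jsonify({"speech": reply, "displayText": reply})

if __name__ == "__main__":
    app.run(port=5000)
```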

2. GPUs are pretty great now

It’s really fascinating how emerging tech develops. Who’d have thought that the proliferation of smartphones would help drop the price of the components that enable VR and AR? Likewise, who’d have thought that our increased graphics capabilities would end up benefiting the field of artificial intelligence? Right now, thanks to the seriously impressive feats of modern-day GPUs, parallel processing is faster, cheaper and more powerful than ever. This has been incredibly valuable in enabling deep learning techniques that just weren’t feasible on older hardware.
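As a developer, you barely notice the parallelism happening. Here’s a tiny TensorFlow sketch (written in the 1.x style of the time) that pins a big matrix multiplication, the bread and butter of neural networks, to the GPU; the matrix sizes are arbitrary and soft placement simply falls back to the CPU if no GPU is around.

```python
import tensorflow as tf  # TensorFlow 1.x style, as it was at the time of writing

# Two big random matrices -- multiplying them is exactly the kind of work GPUs chew through.
a = tf.random_normal([4096, 4096])
b = tf.random_normal([4096, 4096])

with tf.device("/gpu:0"):          # ask for the GPU explicitly
    product = tf.matmul(a, b)

# allow_soft_placement falls back to the CPU if there's no GPU;
# log_device_placement prints which device actually ran the op.
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(product)
```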

3. The cloud is a wonderful thing

Need storage for all that big data? You don’t need to clear out your garage and turn it into a server farm — Google, IBM, Amazon, Microsoft and so many others will give you all the power and storage you’ll need!

4. Computing power overall is getting cheaper

Remember how we just said GPUs have become great? The industry is already looking to go beyond those, towards AI-specific chips. Microsoft researchers in India recently achieved something previously unheard of:

“Varma’s team in India and Microsoft researchers in Redmond, Washington, (the entire project is led by lead researcher Ofer Dekel) have figured out how to compress neural networks, the synapses of Machine Learning, down from 32 bits to, sometimes, a single bit and run them on a $10 Raspberry Pi, a low-powered, credit-card-sized computer with a handful of ports and no screen” — Mashable

Basically, that’s taking a whole complex AI system and potentially running it on a $10 Raspberry Pi!
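That article doesn’t spell out exactly how Microsoft’s compression works, but the general idea of squeezing 32-bit weights down towards a single bit can be sketched in a few lines of NumPy. This uses a common binarisation trick (keep only the sign of each weight plus one shared scale per layer); it’s an illustration of the concept, not Microsoft’s actual method.

```python
import numpy as np

# A layer of ordinary 32-bit floating point weights, as a normal neural network stores them.
weights = np.random.randn(256, 256).astype(np.float32)   # 256 KB of parameters

# Binarise: keep only the sign of each weight (1 bit each) plus one shared scaling factor.
scale = np.abs(weights).mean()
binary = np.sign(weights)                                  # every value is now just -1 or +1

# At inference time the layer approximates the original weights as scale * binary,
# so storage drops from 32 bits per weight to 1 bit (plus a single float for the scale).
approx = scale * binary
print("mean absolute error of the approximation:", np.abs(weights - approx).mean())
```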

There’s a lot that’s possible with smaller AI chips. Sticking with Microsoft as an example (their teams are doing some groundbreaking stuff!), the HoloLens team are using a custom AI chip in their next HoloLens headset, which enables faster, on-device AI. No cloud needed. For devices like the HoloLens, speedy AI is crucial!

An HPU floating in the air

(Source: Microsoft and The Verge)

HoloLens isn’t the only headset using AI for augmented reality tracking; the team at Magic Leap are also using it to improve their SLAM (simultaneous localization and mapping) algorithm so they can more accurately and realistically place and track virtual objects. Using AI in this way allows their tracking to be “fast and lean, easily running 30+ FPS on a single CPU”. So, even in the field of augmented reality, artificial intelligence looks to be a pretty big factor in success over the coming years.

I think my Amazon Echo looked at me funny — Let’s talk about our AI apocalypse.

Jobs, jobs, jobs. We need those.

One of the main concerns often brought up around AI is the huge loss of jobs that an AI-dominated world might bring. In the UK, it turns out that “technology has potentially contributed to the loss of approximately 800,000 lower-skilled jobs” according to a Deloitte report. However, that same report found that almost 3.5 million new ones have been created.

“On average, each job created is paid approx £10,000 per annum more than the lower-skilled, routine jobs they replace, resulting in a £140 billion net boost to the economy.” — Deloitte, “From Brawn to Brains: The Impact of Technology on Jobs in the UK”, 2015

Does that mean that AI will do the same thing? That’s hard to know for sure. I hope so, and I prefer to remain positive that society and its roles will adjust as they have in the past. It doesn’t have to be doom and gloom, but we do need to prepare for and monitor the effects of AI today to ensure we keep it under control in the years to come. Will it always create that many more jobs? Probably not. That Deloitte report describes a whole lot of jobs, and AI is very likely to be a different sort of tech to the iPhone and the other emerging tech that popped up in the UK over the past 15 years.

Exponential singularity madness

Aside from the jobs question, there is another reason some people are concerned: technology is advancing at an exponential rate, so the impact of its growth is really hard to predict. As Tim Urban explains in his look at artificial intelligence, humans expect growth to happen at a consistent rate. We look back, see that technology took X years to get somewhere, and assume the next advance will take about the same amount of time. In reality, technology grows exponentially, improving much faster than most expect. When it comes to AI, this rapid growth could come as a big shock to the masses who don’t realise just how small the jump to superintelligence might be:

“The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us” — Tim Urban, Wait But Why, 2015
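It’s easy to underestimate how far apart those two mental models end up. Here’s a tiny back-of-the-envelope sketch using made-up numbers: assume, purely for illustration, that some capability doubles every two years, and compare that with the steady, linear progress our intuition expects.

```python
# Purely illustrative numbers: linear intuition vs exponential reality.
years = range(0, 21, 2)

linear = [1 + y for y in years]               # "it improves by about the same amount each year"
exponential = [2 ** (y / 2) for y in years]   # "it doubles every two years"

for y, lin, exp in zip(years, linear, exponential):
    print(f"year {y:2d}: linear guess {lin:4.0f}x, exponential {exp:7.0f}x")

# By year 20 the linear guess has reached 21x, while the exponential curve is past 1,000x.
```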

In fact, Elon Musk sees the exponential growth as being two-fold:

That’s a lot of growth. Elon is pretty worried. His level of worry goes beyond that of many others in the industry, some of whom think he doesn’t quite understand the nature of AI and its progress. However, I’m more in Elon’s camp on this one. The exponential growth is likely to take us by surprise if we don’t plan for it ahead of time. The advancements mentioned earlier in GPUs and AI chips are likely to spur on more innovation and new players in the space, which will continue to spur on advancements in software too… and so on. Interest in AI is heating up, and the effects of that shouldn’t be underestimated.

“It’s going to be a real big deal, and it’s going to come on like a tidal wave” — Elon Musk, 2017

The aforementioned Wait But Why article on AI has a picture that explains our current position in a way I couldn’t possibly top:

A diagram showing human progress shooting up like crazy right where we are standing.

The perfect depiction of where we are right now (Source: Wait But Why)

Tim points out that from where we are right now, things look like this:

The last graph but without the crazy sharp incline at the end, as we can't see that yet

What it looks like for us (Source: Wait But Why)

 

That is the big shock concept I’m talking about. From where we stand right now, things seem to be advancing fast, but not so fast that we’ll be engulfed in an AI-dominated future any time soon. Taking into account the nature of exponential growth though… things might get out of hand mighty quick. We’ve already simulated a flatworm brain in 2014 and a simplified version of a mouse brain back in 2015. The highlighted part of the graph below could very well be our current lifetime in this process (that weird blob is a flatworm):

A graph showing exponential growth

That tiny range between flatworm, mouse and human intelligence in the scheme of things.

Can you compete without AI?

Artificial intelligence, even to the extent we see it today, is making waves in the tech world. We’re achieving big things! When Google Translate introduced deep learning, it improved more in one year than it had cumulatively over the previous ten. That’s a whole ten years’ worth of progress matched in a single year, and it has already happened. Want to compete in this era of AI? You’ll need to bring in AI too. Facebook just announced that their translation has also begun using neural networks. This isn’t something coming soon; it’s happening now. As the VP of engineering at eBay said earlier this year:

“If you’re not doing AI today, don’t expect to be around in a few years” — Japjit Tulsi, VP of engineering at eBay, 2017

Things are already moving fast

OpenAI, a non-profit research company co-founded by Elon Musk, recently took on some of the world’s best Dota 2 players in a video game match and won.

“Engineers from the nonprofit say the bot learned enough to beat Dota 2 pros in just two weeks of real-time learning, though in that training period they say it amassed “lifetimes” of experience, likely using a neural network judging by the company’s prior efforts. Musk is hailing the achievement as the first time artificial intelligence has been able to beat pros in competitive e-sports.” — The Verge

That’s a pretty big deal! While there’s still more to come, and this is narrow AI focused on one task (I’ll cover the different types of AI in the next article in this series), it’s an achievement worth noting, especially given the short amount of time needed to train it up.
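OpenAI hadn’t published the details of the bot at the time of writing, but the broad idea of amassing “lifetimes” of experience through trial and error is easy to sketch on a toy game. Below, two copies of the same learner play a simple take-away game (21 sticks, take 1 to 3 per turn, take the last stick to win) against each other tens of thousands of times, nudging shared value estimates up after wins and down after losses. It’s a generic self-play illustration, nothing like OpenAI’s actual system or scale.

```python
import random
from collections import defaultdict

# Toy self-play: 21 sticks, players alternate taking 1-3, whoever takes the last stick wins.
Q = defaultdict(float)        # Q[(sticks_left, sticks_taken)] -> how good that move looks so far
ACTIONS = (1, 2, 3)
LEARNING_RATE, EXPLORATION = 0.5, 0.1

def choose(sticks):
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EXPLORATION:                   # occasionally try something random
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])     # otherwise play the best-looking move

for _ in range(50_000):                                 # "lifetimes" of games, sped up
    sticks, history, player = 21, ([], []), 0
    while sticks > 0:
        action = choose(sticks)
        history[player].append((sticks, action))
        sticks -= action
        player = 1 - player
    winner, loser = 1 - player, player                  # the player who took the last stick won
    for move in history[winner]:
        Q[move] += LEARNING_RATE * (+1.0 - Q[move])     # reinforce the winner's moves
    for move in history[loser]:
        Q[move] += LEARNING_RATE * (-1.0 - Q[move])     # discourage the loser's moves

# After training, the greedy policy tends towards the well-known strategy of
# leaving the opponent a multiple of four sticks.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)]) for s in range(1, 8)})
```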

What’s the “Singularity”?

Eventually, with this exponential growth in the capabilities of technology, unenhanced human intelligence will be unable to keep up. The point when machine intelligence surpasses human intelligence is the Singularity. It’s a moment people are concerned about because… we really don’t know what happens then. Just as a cat, while incredibly cute, cannot truly theorise about what the human population might do in the future, how can we theorise about what an AI will do once its thinking is more advanced than ours?

The same diagram as earlier but with the top highlighted after it intersects with human intelligence

This bit here is the Singularity.

How long until then? Some say around 2040-2045… but others say never; they don’t think our tech will ever reach that level of intelligence. I think we’re plenty capable of getting there. The biggest danger is that we’ll get there in a really haphazard way. If that AI is developed with the wrong focus, or to achieve a task that causes harm to others or to the world… will it be possible to stop it?

Wait.

We might already be there.

7 News story on Facebook's AI entitled Artificial Intelligence Emergency

Oh no. It’s happening.

You saw it, right?

Zoomed in version of that headline

Yep. We’re doomed.

Yep. The headlines show the true gravity of the situation, with gems like “Facebook shuts down robots after they invent their own language” and “Facebook AI shut down after it starts speaking in its own made up language”. Terrifying stuff. Singularity worthy?

Wait.

Let’s check WIRED.

“No, Facebook’s Chatbots Will Not Take Over the World”

Mashable were getting downright annoyed…

“Stop saying Facebook’s bots ‘invented’ a new language”

A screenshot of the headline by Mashable

Mashable’s headline

Subscribers to the Dev Diner newsletter might have recognised the story from a week or two earlier. It was all about trying to train AI to negotiate. Two chatbots were learning to negotiate with each other and started adjusting their sentences into a shorthand that let them communicate faster. Those sentences looked like this:

Bob: i can i i everything else . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to me
Bob: you i everything else . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to
Bob: i . . . . . . . . . . . . . . . . . . .

Not exactly an emergency. Facebook did stop the experiment, not because they feared for their lives against a rogue AI program, but because the AI wasn’t negotiating effectively at all. It wasn’t a dramatic moment where they rushed to unplug the power before all hell broke loose. So never fear, the human race continues for another day!

AI developing its own ways of communicating is pretty impressive overall. It does, however, raise a very good, yet scary, question that Fast Company originally asked: should we allow AI to invent languages we don’t understand? Is the way deep learning works (I’ll be covering that in the next article in this series) already mysterious enough that generated languages are a step too far? These are questions we’ll need to ask as a society, but in a calm fashion that isn’t screaming about an “artificial intelligence emergency”.

The danger won’t necessarily be like the movies

An out-of-control AI future won’t necessarily be about robots shooting us down in a blaze of tech glory like the movies. It might not even be as creepy as bots speaking their own language. It could be as simple as computer viruses spreading more effectively than ever before. Imagine viruses that adapt to avoid detection… just recently, a talk at Black Hat USA 2017 discussed malware evading detection in exactly this way: “Bot vs. Bot: Evading Machine Learning Malware Detection”. A DEF CON talk this year entitled “Weaponizing Machine Learning: Humanity Was Overrated Anyway” discussed an open source bot they created called DeepHack, which “learns how to break into web applications using a neural network, trial-and-error, and a frightening disregard for humankind”.

What if someone tricks an AI? Researchers have already successfully tricked AI into seeing the wrong things.
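The classic demonstration is the “adversarial example”: nudge every input value a tiny amount in exactly the direction that most confuses the model, and its answer flips even though the input looks essentially unchanged to a human. Here’s a minimal NumPy sketch of the idea on a toy four-input classifier; the weights and inputs are made up, but real attacks apply the same trick to the pixels of an image.

```python
import numpy as np

# A toy "classifier": logistic regression over four inputs (weights and inputs are made up).
w = np.array([1.0, -2.0, 1.5, -0.5])

def predict(x):
    return 1 / (1 + np.exp(-(w @ x)))     # probability of class "1"

x = np.array([0.6, 0.1, 0.4, 0.2])        # clean input
print("before:", predict(x))               # ~0.71, confidently class "1"

# Fast Gradient Sign Method (Goodfellow et al.): step each input slightly in the
# direction that increases the model's loss for the true label.
y_true = 1.0
grad_x = (predict(x) - y_true) * w         # gradient of the cross-entropy loss w.r.t. the input
epsilon = 0.25                             # small perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print("after: ", predict(x_adv))           # ~0.41 -- the prediction has flipped
```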

This is stuff that is already a danger today. No robot overlords required.

Or, what if someone accidentally does something horribly stupid? As developers, we aren’t perfect. We make mistakes. Oversights. There’s every chance that’ll happen with AI too.

Hugging Face is a perfect example of things going wrong with AI in a totally unexpected way. Hugging Face is an “AI who learns to chit-chat, talks sassy and trades selfies with you”. One day, their team found some pretty alarming Twilio charges racked up by their bot:

Their alarming balance of -$1,580.52933

(Source: Hugging Face)

It turns out they’d introduced a new feature where the AI could prank your friends with a text. One user decided to prank a very particular friend: their Hugging Face AI friend.

The child asking to prank their bot friend

(Source: Hugging Face)

That began a series of messages back and forth between the two bots:

The bots messages

(Source: Hugging Face)

That gets kinda creepy… I mean, “It’s no secret that the both of us are running out of time”??? That’s way more ominous than those Facebook AI messages. What does that even mean?

They ended up chatting for an hour… at 15 messages every second. That’s roughly 54,000 messages. A lot of messages.
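The fix for this sort of oversight is usually mundane. Here’s a hypothetical sketch of the kind of guard rail that keeps two bots from texting each other into a four-figure bill; the numbers, names and thresholds are all mine, not anything Hugging Face actually uses.

```python
import time
from collections import defaultdict, deque

# Hypothetical guard rail: never reply to our own outbound numbers, and stop replying
# to any sender that is messaging us suspiciously fast.
OUR_NUMBERS = {"+15555550100"}            # made-up example number
MAX_MESSAGES_PER_MINUTE = 10

recent = defaultdict(deque)               # sender -> timestamps of their recent messages

def should_reply(sender: str) -> bool:
    if sender in OUR_NUMBERS:             # it's one of our own bots: don't start a loop
        return False
    now = time.time()
    window = recent[sender]
    window.append(now)
    while window and now - window[0] > 60:   # keep only the last minute of traffic
        window.popleft()
    return len(window) <= MAX_MESSAGES_PER_MINUTE

# Inside the SMS webhook, something like:
# if should_reply(incoming["from"]):
#     send_reply(incoming["from"], generate_response(incoming["body"]))
```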

What if rather than taking over, they just fail… when we need them most?

What if we rely on AI and it just doesn’t hold up when we need it to?

This robot recently decided it’d had enough of its job in DC security:

DC security robot quits job by drowning itself in fountain

(Source: The Verge)

I thought this headline choice was brilliant:

DC security robot says everything is fine, throws itself into pool

(Source: Engadget)

So, in the end, there are plenty of ways AI could cause real problems without ever resembling the robots-take-over-the-world scenario you see in the movies. We need to be considering these and asking questions about them, rather than focusing solely on the Hollywood-style robot apocalypse that so many still worry about today. This is an issue that needs people from all areas to discuss and work on together: governments, engineers, researchers, developers, the community at large… it is going to affect everyone and it needs serious attention today.

Thanks for reading! In my next article in this series, I’ll explore the various terms in artificial intelligence and what they all mean!

Keen on learning to make your own voice assistants? I’ve got an online course in the works on how to do just that with Api.ai — register your interest here to get a discount when it comes out!


