Opinion

Exclusive: Siri creator Tom Gruber on the future of humanistic AI

"Can it be like a Terminator or Skynet? That's a real concern"
By
Mark Matthews

Tom Gruber is best known as one of the co-founders of Siri, the digital assistant that put AI technologies in our pockets. He has been widely recognised for his industry contributions, notably being listed among the world's Top Fourth Industrial Revolution Speakers. In this exciting interview, discover what inspired Siri and why Apple’s AI assistant continues to be so popular.

What inspired you to create Siri?

I was one of three founders of Siri. Dag Kittlaus and Adam Cheyer were the other founders, and there was a team of 24 people, so I wasn't the only person involved.

But when we first decided to start the company, in late 2007, we were inspired by something that had been well known in Silicon Valley since 1987: the Knowledge Navigator video, a vision video that Apple put out showing a virtual assistant on what looked like a little iPad.

It looked like it was doing Siri-like things on a web-like internet, which hadn't been invented yet. It had mobile computing, it had Siri, it had the web, all in that one vision video. That had a big effect on a lot of people.

The core idea of having a virtual assistant as a user interface was out there in the culture; everyone knew about it. What really got us going in 2007 and early 2008 was the timing. I had made my career in AI, Adam had just come from SRI, a place that had been known for amazing technology and demos for years and years, and Dag came from the mobile industry. He had the first iPhone in his hand, and Adam had been building an assistant as part of a Java project.

For years, he kind of knew what it meant to make an assistant that could do things for you, and I knew what it meant to have AI and a user experience. We basically said, ‘wow, this is the time! The pieces are coming together, and we have what we need to actually pull off this vision for the first time in history.’ So, we thought we'd give it a shot.

What do you credit for Siri’s industry-defining success?

Siri's success is due to a lot of things. It was a good team at the right time, we knew how to build it, we knew what needed to be done, and it was a fairly clear vision, as start-ups go.

The idea of a virtual assistant as a new human-computer interaction modality was a very good vision, because it tells you what you need to design. What would an assistant do? Or what kind of tone should this conversational agent take with people who are being abusive to it? Well, what would a professional assistant do?

It was only successful at the scale it was because Apple bought it. Steve Jobs gave us an offer and bought the company. We chose to work with Steve because we believed in the vision that Apple had, which was customer first and economic success second.

So, we optimised entirely for this delightful way of interacting with a little mobile phone, and that aligned everybody in the project around the same North Star. That kind of thing doesn't happen often in the history of tech, and when it does, a lot of big things can happen. I think that's partly why Siri was successful.

What are the guiding principles of humanistic AI?

Humanistic AI is a philosophy of AI. You hear a lot about machine intelligence as the goal; oftentimes it’s ‘let's do whatever it takes to make the machine exhibit intelligence, to automate the things we respect about humans - let's build machine versions of us.’ That's a perfectly good thing to do.

The other way of thinking about it is humanistic AI: why don't we make machines that make humans more intelligent? The difference actually matters. If you pursue the goal of machine intelligence for its own sake, particularly in business, what you end up with is machine intelligence that often competes with humans, for their jobs or for their attention.

Oftentimes, if you pursue the machine intelligence goal for its own sake, all things being equal in an economic situation like the advertising-based attention economy, you end up with AI being used against humans, competing with them for their attention. Or, in the case of employment, competing with them for jobs.

That’s a thing that can make money for some companies, but it's not an effective use of AI to benefit humanity. On the other hand, humanistic AI’s inherent design goal is to augment humans, to make them more effective.

Now, when we talk about automating things that people do at work, we mean the things that are dangerous or tedious, or that require lots of time that people don't have. That border will change over time: as the automation of intelligence gets better, more and more menial tasks can go away.

For example, in healthcare there are a lot of things that are done in tedious ways because that was the only way we could do them. Now, AI can do a lot more of that, so medical scientists and life scientists can think about the theories of health and how we can improve our response to disease - and that's true across the board. It turns out that this idea of augmenting human intelligence, versus competing with it, has been around for a while.

How do you believe AI will shape our everyday lives?

So many good ways it can help us! AI will be augmenting us in all kinds of ways; it's already beginning to. I mean, Siri was an attempt to augment us. When we built Siri, the only way you could use the mobile phone was to tap on the tiny little screen - and of course, that's not easy for everybody.

At the same time, all kinds of cool things were available on the web - services like booking travel and restaurants. Well, those things were only possible if you had ten fingers, a big screen and an internet connection, right? We felt we wanted to bring the bounty of all that to someone with a mobile phone and just their voice. That’s a kind of augmentation: it gives you the power of the ten fingers and the screen while you're carrying the phone around.

Now we see a lot more examples of how AI is augmenting people to overcome disabilities. For example, one of the companies I advise helps people who can't speak because of neurological conditions, like Stephen Hawking. AI understands what they're trying to say by reading their brain waves and interpreting them into speech - that wasn't possible before AI came along to help make sense of that data.

The next thing is essentially nurture - nurture in the sense of a mother nurturing her offspring. AI today is more like Big Brother, oftentimes being used against people, but I like to think of AI as Big Mother, like a mother bear protecting her cubs. There's a lot of dangerous stuff out there, but what does a mother do? A mother tries to use her skills to make you a better person: healthier, with better self-care, better mental healthcare, better social interaction with peers and so on. That's what AI can do in the future.

I'm not just making this up. There are many real companies and projects working on those goals, starting with simple things like wearables - watches and rings. They're starting to give us feedback about how well we sleep and how well we focus.

The final thing that AI is going to do is transform society. It's already beginning to transform it in negative ways, but I think we can turn that around. It's going to transform societies because of a couple of things:

One, AI can overcome differences between people that are unimportant. As they say on the internet, you can be any colour or any gender that you want to be. Even things like cognitive disabilities, whether it's a spectrum or emotional thing, or just IQ - these differences can be shored up when AI is mediating for you.

There’s a more subtle and even more powerful way. Right now, we as individuals haven't really realised our potential as a collective. We don't really think well together today. I mean, the only techniques we had for that were the media and politics… and those traditional ways in which we would collectively think have been disrupted in a serious way. The one piece of collective intelligence that survives is science.

Ironically enough, AI advances fastest because of its use of the scientific method. The point is that there are systems of thinking together that do work, like science, and those that don't, like traditional politics. AI is going to play a big role in mediating mass collaboration and solving some of the problems we need to solve, like climate change, that depend on solutions at mass scale.

Are there ethical risks to AI and big data, and if so, how should businesses combat them?

There are huge ethical issues in AI. AI is probably the most powerful technology that's been invented this century; the CEO of Google calls it the most important invention since fire, and his company has invested more in AI than any other company in the world. It's very powerful, and all powerful technologies have ethical consequences.

In the case of AI today, there are people worried about the future - can it be like a Terminator or Skynet? That's a real concern, but it turns out that we have much more pressing real-life problems happening right now because of the power of AI. The most outstanding one is the fact that AI is the engine of optimisation behind the big social media platforms like Facebook, YouTube and Snapchat. They've optimised for what's called growth hacking: how much can you get people to stay addicted and stay online? They use the big data they gather from their users to predict what will work to keep them online and addicted.

So basically, humans don't have a chance against that kind of technology. It's not an issue of free speech or economics. This is a technology that is having an impact at human scale, political scale, geopolitical scale - lives are being lost because of misinformation. This is serious business in the real world.

How can businesses address this? Well, especially in technology, ethics is not a simple matter of ‘do the right thing’ or ‘have morals’. The interesting part is: how do you embed values in the very technology that you employ? An example of not doing that: if you make the AI care only about winning an adversarial game against humans for their attention, that's going to produce negative human impact.

We want to make sure the business is making money, and we want to make sure that human society is benefiting as well. You can literally engineer those two bottom lines into your equations.
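To make that last point concrete, here is a minimal sketch (mine, not Gruber's; all item names, scores and weights are hypothetical) of what a dual-bottom-line objective could look like in a content-ranking system, where a well-being term is blended with the usual engagement term:

```python
from dataclasses import dataclass

@dataclass
class Item:
    """A candidate piece of content with model-predicted scores (hypothetical)."""
    name: str
    engagement: float  # predicted chance the user keeps watching, 0..1
    wellbeing: float   # predicted benefit to the user (sleep, mood, learning), 0..1

def rank(items, wellbeing_weight=0.5):
    """Rank content by a blend of the two bottom lines.

    score = (1 - w) * engagement + w * wellbeing

    With wellbeing_weight = 0 this collapses to pure growth hacking:
    the system optimises only for keeping people online.
    """
    w = wellbeing_weight
    return sorted(items,
                  key=lambda i: (1 - w) * i.engagement + w * i.wellbeing,
                  reverse=True)

feed = [
    Item("outrage-bait video", engagement=0.95, wellbeing=0.10),
    Item("sleep-hygiene explainer", engagement=0.60, wellbeing=0.90),
]

print([i.name for i in rank(feed, wellbeing_weight=0.0)])  # engagement only
print([i.name for i in rank(feed, wellbeing_weight=0.5)])  # dual bottom line
```

With the well-being weight at zero, the outrage-bait wins; raise it and the ranking flips. A real platform would learn these scores from data rather than hard-code them, but the principle - putting both bottom lines into the same objective - is exactly the kind of thing that can be engineered.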

Written by
Mark Matthews