Future-Talk: CEO Rupert Stadler, robot developer David Hanson and science fiction researcher Alan N. Shapiro meet in an Audi future lab to discuss controversial issues. A conversation about knowledge pills, algorithms, self-learning and self-driving cars, as well as the ethics and simplicity of humans.

Susanne Mellinghoff (text) & Christian Maislinger (photos)

You’ve created an attractive female robot that answers to the name of Sophia. Why do your robots look like humans, Mr. Hanson?

HANSON: People like people. That’s why humanoid robots appeal to our emotions and our nervous system. We feel attracted to them, we admire their looks and we identify with them. That’s why children like playing with dolls. Humanoid robots make it easier for us to access artificial intelligence and make AI intuitively tangible. So why shouldn’t they look like humans? We can even fall in love with these robots.

 

Can you also develop feelings for cars?

STADLER: Think of a thoroughbred sports car, and the butterflies in your stomach that it produces. We like our cars and definitely develop passionate feelings for them. In the future though, the car will also reciprocate these feelings. It will recognize us and understand us better. Maybe even better than we do ourselves. When we get into the car after a stressful day, for instance, it will play our favorite music, massage our back and know what we need. At Audi, we call it PIA, the personal intelligent assistant.

 

We’re all already familiar with robots as lawnmowers or in industrial manufacturing. But what do we need humanoid robots for?

SHAPIRO: Androids like Sophia will play a role in all areas of life. You can think of it like in a science fiction film. Androids will be our friends and partners, and will also support us at home.

 

So we humans are inadequate on our own?

SHAPIRO: To be honest, we’re actually pretty bad. In my view, the essential aim of AI should be to transform humans themselves to make them more empathetic and more ethical. To surpass themselves. The relationship between humans and AI should be a partnership in which we also learn what really constitutes human intelligence.

 

For example?

SHAPIRO: Intelligence consists of many facets. There are significantly more exciting aspects than those associated with rational, calculating intelligence. Think of social intelligence, for example: the ability to behave appropriately in interpersonal situations.


STADLER: Another example is emotional intelligence, that is, the ability to perceive and interpret feelings and then act accordingly.

So do we know precisely what constitutes intelligence?

SHAPIRO: No. The assumption that you can reproduce human intelligence in artificial intelligence is totally wrong. The idea of transferring biological algorithms to mathematical algorithms shows just how simplistic we humans are.

 

Are you serious?

SHAPIRO: Of course. We have huge shortcomings. People are dying of starvation around the world, there are wars and dictatorships. Another example is climate change, which we are unable to get under control. We obviously can’t come up with solutions to serious man-made problems. On the contrary, we’re doing everything we can to wipe out our world and ultimately ourselves.

 

So we’re simply not viable. And that’s why we need AI?

STADLER: That is surely not the sole motivation, but AI will enable us to use our resources much more efficiently. And self-learning machines can undoubtedly also make our everyday lives much easier.


HANSON: There are things that we humans simply cannot do. Machines and robots can, for instance, lift much heavier loads. They’re more precise and much faster than we are. They don’t get tired, get sick or need a vacation.

 

Are humans becoming superfluous?

HANSON: No, because we ultimately also have some outstanding strengths. Not everything is based on logic. Humans tend to make personal decisions in particular based on gut instinct. This intuitive did-everything-right feeling is alien to a robot. It’s not about replacing us humans, but about extending our human potential. Assisted by robots, we can achieve more, become better and surpass ourselves.

 

STADLER: That applies equally to using AI in the working world. Robots are already working hand in hand with humans in our factories and are taking over strenuous tasks.

 

SHAPIRO: All of the technology that humans have developed thus far has been aimed at extending our possibilities. For many years, humans have had the benefit, for example, of medical implants that prolong life. In the future, though, this will go much further still. We will extend our cognitive skills, our intellectual capacity. We will, for instance, be able to absorb knowledge as and when required with a pill, such as a foreign language or a particular skill.

 

Does that mean in the future I’ll board a plane to Beijing, pop in a pill and be speaking fluent Mandarin when I land?


SHAPIRO: Believe it or not, you won’t need a dictionary anymore. You will become the dictionary yourself.

 

Sounds tempting. And when will all this be possible?


SHAPIRO: It’ll be some time yet. There are many obstacles to overcome before we can actually turn these science fiction technologies into reality. We still don’t have the right IT systems, and AI research is missing an interdisciplinary approach. This is why I don’t believe that machines will be equal to us or even exceed our abilities in the foreseeable future.

 

So at the end of the day it’s all just fiction?


HANSON: It is in fact still very early days. AI is already much more reliable than we humans when it comes to analyzing medical images, or rapid stock exchange trading. But the breakthrough will come. And it isn’t an issue for our children’s children, it’s something we ourselves will experience.

 

Mr. Stadler, can you imagine, given these enormous prospects, soon working with an android as a colleague on the Board of Management?


STADLER: As a next step, I would prefer to have a woman on the Board of Management …

 

… but hypothetically speaking, an android certainly acts with less emotion than we humans, right?


STADLER: Absolutely. A touch of rationality wouldn’t hurt in certain decisions made by humans. It’s the interplay of the two that makes all the difference.

 

Mr. Shapiro is skeptical about crediting humans with intelligence, while Mr. Hanson is working on the robot of the future. What is Audi doing, Mr. Stadler?


STADLER: We’re looking at machine learning, among other things. This means a computer, in our case the car’s operating system, learns to act without being preprogrammed explicitly for a certain situation. Machine learning is essential for piloted driving and enables the car to act autonomously even in unforeseen situations. The car initially learns from specific situations, but can later generalize what it has learned. The more miles it clocks, the better it becomes. We are working hard on this issue and that is also why we went to one of the world’s most important conferences for experts in AI in 2016. We presented a model car that uses machine learning to develop intelligent parking strategies. In the next step we will transfer that to a real car. The goal is the intelligent car that can make independent decisions even in complex situations.
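
The interview does not disclose how the model car actually learns its parking strategy, but reinforcement learning is one common way to realize what Stadler describes: learning from specific situations and generalizing later. The sketch below is purely illustrative and rests on assumptions of its own (a one-dimensional toy parking lot, tabular Q-learning and invented reward values); it is not Audi’s implementation.

```python
# Illustrative only: a toy agent learns where to park through trial and error.
import random

N_SLOTS = 10                      # positions along a one-dimensional parking lot
TARGET = 7                        # the free slot the car should stop in (made up)
ACTIONS = ("forward", "back", "park")

# Q-table: expected long-term reward for each (position, action) pair
Q = {(s, a): 0.0 for s in range(N_SLOTS) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def step(pos, action):
    """Simulate one maneuver; return (new_position, reward, episode_finished)."""
    if action == "park":
        return pos, (10.0 if pos == TARGET else -5.0), True
    new_pos = min(N_SLOTS - 1, pos + 1) if action == "forward" else max(0, pos - 1)
    return new_pos, -1.0, False          # small cost for every extra maneuver

for episode in range(2000):              # "the more miles it clocks, the better"
    pos, done = random.randrange(N_SLOTS), False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, occasionally explore
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(pos, a)])
        new_pos, reward, done = step(pos, action)
        best_next = 0.0 if done else max(Q[(new_pos, a)] for a in ACTIONS)
        Q[(pos, action)] += alpha * (reward + gamma * best_next - Q[(pos, action)])
        pos = new_pos

# The learned policy generalizes: it parks correctly from any starting position.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_SLOTS)}
print(policy)
```

The episode loop is the analogue of “the more miles it clocks, the better it becomes”: every additional simulated maneuver refines the table of expected rewards from which the final parking policy is read.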

 

Isn’t the expertise for programming artificial intelligence actually in other sectors?


STADLER: You might not necessarily associate AI with a car manufacturer. But to push piloted driving forward, we need to build up AI as a core competence. Meanwhile, consortia such as the one between Audi, Mercedes and BMW with the HERE map service are becoming increasingly important as a way of pooling expertise.

 

Is that why Intel suddenly decided to join this alliance?


STADLER: Intel brings enormous expertise in developing and optimizing hardware and will support us decisively in our future projects. Together we want to develop a digital platform so that we can update high-resolution maps in real time.

 

Automobile literally means self-moving. Why has it taken the car industry over 130 years to discover automated driving?


STADLER: The dream of the self-driving car is as old as the dream of perpetual motion. But until now, we haven’t had the technology to fulfill it. Solutions are finally emerging to some of the problems that have long been regarded as insurmountable. Enhanced computing power, for instance, is allowing us to utilize huge amounts of information and take the next step toward piloted driving.

What does that do for me personally?


SHAPIRO: You get freedom back. For us Americans, the automobile is synonymous with freedom. The American way of driving, so to speak. But since I’m not the only person with a car, particularly in urban areas, I’m constantly stuck in traffic jams. My car becomes a sort of cage, and the traffic jam a form of confinement.


STADLER: You call it confinement, for me it’s wasting time. That’s why we want to give our customers a 25th hour with piloted driving …

 

… my day only has 24 hours.


STADLER: Mine too. But what we tend not to have is time for ourselves, a personal 25th hour. With piloted driving, it’s no longer just about getting from A to B. If the car of tomorrow has piloted driving, people will be able to use their time differently. You won’t waste time in traffic jams. The Audi of the future will be a place to work, relax and enjoy experiences. Finally, we will once again have time to listen to music, read a good book, watch movies or Skype with the family.


SHAPIRO: Generally speaking, technology should no longer be seen as just a tool. Instead, it should be a living environment.

 

The car collects masses of data to facilitate automated driving. Can we use that data for anything else?


HANSON: Instead of cars, imagine fish, birds or insects. Many species of animals move in swarms, orient themselves by other members of their species and benefit from doing so. The same applies to swarm intelligence in traffic. One car on its own knows little, many cars know a lot. Each individual car can help enhance the overall performance of all cars by making its data available to the others via the cloud. Taking that idea further, you end up with a brand-new value creation system. Data becomes currency. The more data a car collects, the more value it adds for society.
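
Hanson’s cloud idea can be pictured with a small sketch: each car reports what it observes on a road segment to a shared store, and any car can query the pooled result. The class and the data model below (road segments mapped to observed speeds) are hypothetical simplifications for illustration, not a real fleet protocol.

```python
# Illustrative only: cars pool observations in a shared "cloud" store.
from collections import defaultdict
from statistics import mean

class CloudStore:
    """Shared store that pools per-segment speed observations from many cars."""
    def __init__(self):
        self.observations = defaultdict(list)    # segment id -> list of speeds (km/h)

    def report(self, segment, speed_kmh):
        self.observations[segment].append(speed_kmh)

    def expected_speed(self, segment, default=50.0):
        obs = self.observations[segment]
        return mean(obs) if obs else default      # one car knows little ...

cloud = CloudStore()

# ... many cars know a lot: three cars report the same congested segment.
for car_speed in (12.0, 8.0, 15.0):
    cloud.report("A9-km-512", car_speed)

# A fourth car that has never driven this segment can still anticipate the jam.
print(cloud.expected_speed("A9-km-512"))   # ~11.7 km/h -> warn the driver or reroute
print(cloud.expected_speed("A9-km-513"))   # no reports yet -> fall back to the default
```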

 

Is that really what your customers want, Mr. Stadler?


STADLER: The data belongs to the customer. What happens to that data is entirely up to him or her. But it is also clear that with artificial intelligence, Big Data will be the oil of the twenty-first century. We are in the age of the Big Data Economy.

 

If someone hacks into my computer at home, my data is at risk. What happens if they hack into my car as I’m driving – is my life then in danger?


STADLER: No. Safety is our top priority. We’re running through every possible scenario in the development process to identify vulnerabilities and eliminate them in advance.

 

Automated driving raises ethical questions in particular. How should a self-driving car respond in an unavoidable accident situation where either a child in the road, or the occupant, could be killed?

 

STADLER: No car manufacturer can decide on its own how to resolve this kind of dilemma. To define a binding framework for action we need a broad public debate with all the affected stakeholders – from insurers to accident researchers and courts that deal with traffic issues. I’m confident that piloted driving will substantially reduce the number of traffic accidents overall. After all, human error currently accounts for more than 90 percent of all accidents.

 

How realistic do you find the idea of one day having zero accidents?


SHAPIRO: It’s an interesting vision. But you need to realize that accidents are inherent in any technology. They are the midwives of progress. Airplanes crash, power plants explode. It’s inevitable. But that’s no reason to fear that technology will lead to dystopia and the end of mankind. We need instead to foster trust in technology. Think of it as making our lives and our world better.

You have a great deal of trust in these supposedly intelligent machines. How should we integrate them into our society?


SHAPIRO: First we need to understand that machines are not dead. They are alive and equal to us. So we need to treat them sympathetically, show them feelings, appreciate them and grant them autonomy and rights.

 

I beg your pardon? Did you say rights?


SHAPIRO: Yes. We need to grant them the same rights as a human. That way, we’ll also surpass ourselves.

 

Will we then need a separate jurisdiction for machines?


HANSON: Absolutely. As soon as machines have consciousness and a will, the law and statutes will change around the globe. In the future, machines will even be morally, ethically and, ultimately, also legally responsible for their actions.

 

Back in 1942, Isaac Asimov described three robot laws in a science fiction story. Are the Asimov laws now obsolete?


HANSON: The three laws – that robots may not harm a human being, must obey human orders, and should protect their own existence as long as that does not conflict with the first two laws – still apply. Yet the machines and robots of the future will be far better morally than we are. This notion goes way beyond the Asimov laws.

 

What does that mean?


HANSON: Machines will make impartial decisions so that, at some stage, we will call for them to be given power. Companies, legislators, and indeed society as a whole will demand this happens. Gradually, we will leave the field to robot judges, robot politicians and robot cars.

 

Isn’t there the risk that machines will eventually see us as a problem and want to wipe us out, just like the Terminator scenario?


HANSON: No. If machines not only exceed human capabilities but are also far superior to us morally, they won’t be able to turn against us.

 

That still sounds like potential for conflict. The renowned physicist Stephen Hawking warns us that AI could destroy us some day. Aren’t you being overly optimistic, Mr. Hanson?


HANSON: We’re treading a fine line here. Of course there are risks, and some of these are big. But without optimistic visions, progress will elude us. For an AI enthusiast like me, it is absolutely clear that a positive future awaits us. I’m convinced that AI will usher in a golden age for us.

 

Let’s just return to the self-driving car for a moment. Why would I buy an Audi R8 if I’m not even allowed to drive it myself?


STADLER: An R8 will also benefit from piloted driving. Imagine you’re on a racetrack. As a self-driving car, the R8 will show you how to achieve your best time. It monitors you and coaches you. Believe me, it’s an entirely new experience.

Artificial Intelligence

Artificial intelligence (AI) is a branch of computer science that aims to replicate human-like intelligence in machines and software. Strictly speaking, it entails intelligent software that continuously improves itself so that it can solve problems and achieve goals autonomously. Research into artificial intelligence is geared to making all areas of life easier for humans, from robots taking over tedious tasks to greater safety and reliability in transportation. Constantly increasing computing power, the huge growth in data and intelligent algorithms are currently helping AI research make enormous progress.

The field was decisively shaped in 1950 by British mathematician Alan Turing and the Turing test named after him; the term “artificial intelligence” itself was coined by John McCarthy in 1956. To determine whether a machine can be regarded as intelligent, a test person communicates via a computer terminal with two partners they cannot see: a machine and a human. If the test person cannot distinguish between the human and the machine, the machine is said to be intelligent. Whether any machine has already passed the Turing test is open to debate. The computer program Eliza, developed by computer scientist Joseph Weizenbaum at MIT (Massachusetts Institute of Technology) in 1966, is one early attempt at putting it into practice. The program simulates a psychotherapist, can conduct a dialogue and is regarded as the precursor of modern-day chatbots.
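
The principle behind Eliza-style dialogue can be illustrated in a few lines of Python: match the user’s sentence against simple patterns and reflect fragments of it back as a question. The rules below are invented for illustration and are not Weizenbaum’s original program.

```python
# Illustrative only: a tiny Eliza-style responder using pattern matching and reflection.
import re

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),               # fallback keeps the dialogue moving
]

def reflect(fragment):
    """Swap first-person words for second-person ones ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel stressed after a long day"))  # Why do you feel stressed after a long day?
print(respond("I am worried about my job"))         # How long have you been worried about your job?
print(respond("The traffic was terrible"))          # Please go on.
```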
