"It shouldn’t be a case of ‘ROBOT SAYS NO’"


Will robots be taking over more of our jobs? Will they be able to make moral decisions? Three scientists share their vision and explain why we’re ready for change and shouldn’t be fearful.

TEXT: Pauline Bijster
ILLUSTRATION: Mireille Schaap

Stefano Puntoni is a professor in marketing at Rotterdam School of Management (RSM). He researches and teaches on brand management, marketing strategy, consumer behaviour, and the role artificial intelligence (A.I.) plays in all of this: man versus machine.

"One of my research projects is on how people feel about the fact that machines are taking
over existing jobs. This has been happening for a while – a lot of factory workers have been replaced by robots over the years. But now we’re reaching a new level: office jobs are facing the same future. Administration, for example, can in part be done digitally,
which means accountants aren’t in as high demand. Some growing companies have to keep need fewer marketeers on learning.’ because of how much can be computerised. Even the work of radiologists in hospitals can be replaced as such. We’ll always need accountants and radiologists, but simply fewer of them. Machines will also get better at doing cognitive work – for example, reserving a table at a restaurant in advance, or sending an email in your name. Look at the Google- project Duplex.
The pessimist would say: but what would all these people do? From an economic point of view, you could say that throughout history technology has always created more jobs than it has destroyed. This will happen again. But the jobs of the future will certainly look different. Employees of the future will need a different kind of skill set, more than just being able to ‘do something right’ or being able to think logically. They’ll need a different mindset. They’ll need to learn how to deal with change, be flexible, and be prepared to keep on learning. And while we don’t know exactly what shape these future jobs will take several decades from now, qualities such as ‘leadership’ and ‘teamwork’ will become increasingly important in the workforce. The same goes for personal development."

"People should be prepared to have to keep on learning"

Katharina Bauer is assistant professor in practical philosophy at Erasmus School of Philosophy (ESPhil). She researches ‘human enhancement’: what would the ideal, perfect human look like? And how desirable is it to be perfect at all? She also looks into the robotisation of the human, as seen in cyborgism.

"Robots are often anthropomorphic: they integrate with humans and take over human tasks. In some cases they can entirely replace the human. But an important idea in moral philosophy is that humans can’t be substituted because of their individuality and their moral status. It’s an interesting issue: what’s the moral status of a robot, when that robot kind of takes over the role of a person? Another interesting question is: can we programme machines in a way that’ll make them act morally, to have them make decisions or even make moral judgements? Can they develop a consciousness? Oftentimes the robot is used to contrast humanity – sometimes as a partner, sometimes as a subject, sometimes as an opponent, and sometimes also as a certain mirror image. When we think about robots, we’re actually thinking about ourselves: the signifiers that make humans human, and that’s very interesting."

"What are the signifiers that make humans human?"

Jos de Mul is professor of philosophical anthropology at Erasmus School of Philosophy (ESPhil). He has a keen interest in the relationship between man and technology, and his research looks into how information technology – such as artificial intelligence and robotics – is changing our ways of life and our human self-image.

"Humans can maintain different relationships with technology. Tools can be an extension of human action. Like a hammer, for example. Tools can give us information in relation to our environment that is beyond human capacity, like an infra-red camera. They can perform a clear, given task independently: a washing machine. Or they can do their work quietly and in the background: central heating. Finally, they can also become our certain 'other', an opponent: the chess computer. Robots, however, we tend to consider as an 'other' at all times, as Stefano and Katharina have explained. And as Katharina rightfully indicated – the robot can take up many different roles. The reason we consider robots as such, is, I think, due to the fact that their programming gives them a certain degree of autonomy when they interact with their environment. My vacuum cleaner chooses the most efficient route to go through the house, and it does this on its own.
The question is whether we should also grant them moral autonomy. Should we task drones with choosing whether to kill a suspected enemy? Should we task self-driving cars with deciding whether to drive into a pedestrian or crash into a tree? Maybe it’s wiser, especially when it comes to big moral decisions like that, to consider the robot as an extension of human action – but still keep a person at the proverbial wheel. By all means, let’s still keep humanity responsible. ‘Robot says no’ should never be the final word."

"By all means, let’s keep humanity responsible"
