The age of robots could be a new renaissance

by Leonardo Quattrucci
Why have humans created machines? Technology is born from the human desire to save time in order to invest it in imagination. Technology generates time and space for the activities that make us quintessentially human: creativity and entrepreneurship, inventiveness and empathy. It serves us by alleviating suffering and creating solutions to grow and fulfil our aspirations – from curing cancer to landing on Mars.

But humankind’s most common mistake is to forget its ends, as Nietzsche warned. This is possibly why, in the face of automation, many of us are overwhelmed by fears of a Matrix-like apocalypse. At the opposite extreme, there are those who celebrate the promises of artificial intelligence (AI) with perhaps too much enthusiasm, often overlooking the risks of delegating decisions to entities whose learning and reasoning processes are ever harder to decode.
Today, the possibilities of AI depend on the limitations designed by its creators. Therefore, as AI becomes more pervasive in our daily life, it is imperative to focus on which roles and responsibilities we humans must retain in the age of robots.
A social contract for AI
AlphaGo – the AI developed by Google DeepMind – performed unexpected and novel moves to beat the best human players at Go, the highly complex Asian board game. However, the question is: were AlphaGo’s moves genuine inventions, or were they combinations so sophisticated that other players had not yet envisioned them?
Machines are already superior to humans in terms of computation and memory. At the same time, the applications of AI are a human choice – a political and social one. AI is better than humans at modelling traffic flows, anticipating the incidence of an epidemic or managing energy efficiency, but the questions it analyses and the biases it manifests in trying to answer them are inherited from its designers: us.
The evolution and diffusion of AI are also a question of social license. We have developed norms and laws to regulate social and economic discrimination and the use of drugs and arms, or – more simply – to set speed limits. In the same way, we should set rules of conduct for autonomous vehicles.
What criteria should guide us when we choose which decisions to delegate to machines? For every AI that becomes more refined, there needs to be a community of citizens that assesses its social purpose and application.
New standards for a new era
‘Trust your instinct’ is a recommendation that has its limits: the probability that it will yield effective decisions is more or less that of a coin toss. In other words, there is often a gap between our actions and our understanding of how we decided to perform them. Trying to explain that gap is at best a simplification of our thought – or un-thought – process. We should not be surprised, then, if some learning processes inside an AI are just as cryptic as those in the human brain.
This is neither to say that we should surrender to such evidence, nor that we should refrain from innovation or limit the entrepreneurial risk-taking that is necessary to advance it. Rather, it requires us to develop standards and instruments to manage technological development. A Commission for Artificial Intelligence, for instance, would have a mandate to assess how explicable and transparent the decision-making processes of machines are. Or why not host regular competitions among machines to reveal their biases and fallibility – and to test their capacities – in the same way that we undertake stress tests for banks?
Humans fit for technology
Giuliano Toraldo di Francia, an Italian physicist, once said: “We need to create technology fit for humans; we also need to create humans fit for technology.” We choose to create machines so that we can specialize in our uniquely human advantages. Reminding ourselves of that should be our first step. The next step is to prepare for the moment when we will have to engage critically in a daily dialogue with AI – some of which we will wear, if not have implanted.
Sounds like science fiction? Smartphones today are already a sort of extension of our minds, and a quasi-permanent extension of our bodies. We need a ‘philosophy of technology’ that equips us with a behavioural and moral compass in technology-rich environments. As AI becomes an integral part of our public and private lives, we need to establish rights and responsibilities for ‘human-machine citizenship’.
Could the age of robots be a new Renaissance instead?
This article was originally published by LINC Magazine. – Social Europe