For a law of ‘algorithmic justice at work’

by José Varela on 9th July 2020
Workers must be protected from adverse decisions where responsibility is displaced to apparently anonymous algorithms. More and more companies delegate many of their responsibilities as an employer to algorithms, separating the human factor from labour management and exchanging it for computer programmes.
The recruitment of staff, the organisation of working time, professional promotion and the allocation of bonuses—even the application of a disciplinary regime—are all being put at the disposal of algorithms. This trend poses a severe risk to the rights and freedoms of workers.
Digital platforms already manifest this threat: their algorithms control and monitor their workers, evaluate their performance, determine their remuneration and even execute layoffs—all under abstruse, capricious and opaque criteria. Many attribute to these computer programmes characteristics science rejects, such as infallibility, neutrality and surgical precision.
Not infallible
Algorithms are not infallible. Their decisions can be as biased as those of any human; therefore, their resolutions cannot be considered superior or more objective. This is confirmed by studies from Princeton University, which evaluate algorithms as ‘very unreliable’ when applied to work environments. A resolution of the Council of Europe warns against ‘triggering additional interferences with the exercise of human rights in multiple ways’. The World Economic Forum even talks about ‘artificial stupidity’.
Algorithms are not empathetic: they do not understand concepts of humanity or honesty. They do not have a scale of values, nor do they distinguish cultural or social differences. They do not forgive or forget, and they are unaware of their own fallibility. They have no common sense.
Even in certain perceptual skills, they demonstrate the capacity of a baby. And they do not correct themselves through considerations of understanding or justice, balance or diversity, ethics or morality. Today, and in the long term, human comprehension is essential for making decisions that are fair and equitable.
Nevertheless, companies continue to implement these tools, so it is necessary to regulate their use and application in labour relations, closely monitoring compliance with human rights and legal obligations to workers. We are not talking about applying a ‘precautionary principle’ to a hypothetical risk: it is much more than a preventive measure. We must prevent repetition of discrimination based on race, gender and ideology—types of discrimination that have already been verified in the operation of many algorithms.
Algorithmic justice
The Spanish trade union UGT is thus advocating for ‘algorithmic justice at work’—a law that regulates the safe use of these computer tools in Europe. Key demands are, in short:
• full application of article 22.1 of the General Data Protection Regulation—the right of each worker not to be subjected to solely algorithmic decisions—and safety from retaliation in exercising this right;
• expansion of guarantees specified in the GDPR, so that decisions based on autonomous computing solutions are explicable, accessible and reliable; the logic applied in each decision is accurately and comprehensibly presented; rights to information and consultation take precedence over other laws (such as those on industrial property); and all workers, and their representatives, are able to exercise these rights;
• measures to promote gender equality and diversity among those responsible for programming and auditing algorithms; and
• clarification of which of the actors involved in an algorithmic decision (employer, software provider, insurer …) is ultimately legally responsible.
Only through legal initiatives such as these can we ensure safe and fair working environments for 21st-century workers. Digital progress is necessary for our economies and for the competitiveness of the European Union and its member states. But this progress cannot be at the expense of workers’ rights nor bypass union responsibilities.
We want progress and we want digitalisation, but not at any price—always with social balances, under fair criteria and with real labour rights.
(José Varela is head of digitalisation at the Unión General de Trabajadores in Spain.)