When machines think for us: Consequences for work and place

by Judith Clifton, Amy Glasmeier and Mia Gray on 14th May 2020 @_mia_gray
The one sure way not to forecast the impact of artificial-intelligence technologies is technological determinism.

Will artificial intelligence affect how and where we work? To what extent is AI already fundamentally reshaping our relationship to work? Over the last decade, there has been a boom in academic papers, consultancy reports and news articles about the possible effects of AI, creating both utopian and dystopian visions of the future workplace. Despite this proliferation, AI remains an enigma: a newly emerging technology whose rate of adoption and implications for the structure of work are still only beginning to be understood.
Many studies have tried to answer the question of whether AI and automation will create mass unemployment. Depending on the methodology, approach and countries covered, the answers are wildly different. The Oxford University scholars Frey and Osborne predict that up to 47 per cent of US jobs will be at ‘high risk’ of computerisation by the early 2030s, while a study for the Organisation for Economic Co-operation and Development by Arntz et al argues that this is too pessimistic, finding only 9 per cent of jobs across the OECD to be automatable.
In a new paper, we argue that the impact of AI on work is not deterministic: it will depend on a range of issues, including place, educational levels, gender and, perhaps most importantly, government policy and firm strategy.
Highly uneven
First, we challenge the commonly held assumption that the effects of AI on work will be homogeneous across a country. Indeed, a growing number of studies argue that the consequences for employment will be highly uneven. Place matters because of regional sectoral patterns: industrial processes and services are concentrated and delivered in particular areas. At present, AI appears to cluster in locations with pre-existing regional industrial agglomerations.
Moreover, despite globalisation, national and local industrial cultures and working practices often vary by place. Different cultural work practices mean that once deployed, the same technology may operate distinctly in diverse environments.
Secondly, education matters. Generally, jobs occupied by less-educated workers are more susceptible to the impacts of AI and automation than those of better-educated peers performing more complex and discretionary tasks. For example, in the financial and insurance sectors, repetitive, data-intensive operations may be more automatable in the US than in the UK, owing to differences in average education levels within these professions. Another example is legal services, where those in paralegal, less-skilled occupations are at most risk of displacement.
Thirdly, it appears that men’s jobs are currently more vulnerable to automation, especially those requiring lower educational attainment, since these tend to involve routine industrial tasks amenable to mechanisation. This may, however, change in the future.
Women dominate many care jobs in ‘high touch’ occupations, where emotional and cognitive labour are significant. These jobs appear more resistant to technological encroachment, as they involve face-to-face work. In the medium term, though, emerging applications aim to augment even these service functions with machine assistance and are likely to interact with and produce new gendered divisions of labour.
Narrow focus
Fourthly, the consequences of AI on work will depend, crucially, on policy and the firm. Acemoglu and Restrepo argue that productivity increases could outweigh the displacement effect of technologies under the ‘right’ type of AI: if governments actively support AI which enhances jobs, rather than AI which seeks to eliminate jobs, the outcome could be positive overall.
To do this well, government also needs to accompany AI with social policy. Governments have started publishing AI policies in the last few years. But a comparative analysis of government AI strategies shows that, to date, the great bulk of policy has focused narrowly on economic gains, with very little attention paid to social issues. Yet understanding the latter is a precondition of societies being able to evaluate, and regulate, new applications of AI.
Firms, too, can opt to promote the ‘right’ type of AI—or not. Meanwhile, they may increasingly turn to AI to support recruitment.
This could be problematic, since AI algorithms have been found to contain embedded gender and racial biases. Technologies such as facial and voice recognition, automated screening of curricula vitae and targeted profiling may inadvertently narrow the pool of eligible job-seekers in prejudicial ways. If businesses deploy these tools in recruitment, the distribution of job opportunities could be profoundly affected, and AI might reproduce pre-existing biases around gender, ethnicity and class.
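To make this concern concrete, one simple check a firm (or regulator) can run on an automated screening step is to compare selection rates across applicant groups. The Python sketch below is purely illustrative: the group labels and outcomes are hypothetical, and it shows only the widely cited ‘four-fifths rule’ rule of thumb for flagging possible adverse impact, not the workings of any particular vendor’s tool.

```python
# Minimal sketch: auditing an automated CV-screening step for adverse impact.
# The records below are hypothetical; a real audit would use the firm's own
# screening outcomes and protected-group labels.
from collections import defaultdict

# Each record: (applicant group, whether the screening model passed them on)
screening_outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

passed = defaultdict(int)
total = defaultdict(int)
for group, advanced in screening_outcomes:
    total[group] += 1
    passed[group] += advanced  # True counts as 1

# Selection rate per group, and the ratio of the lowest to the highest rate
rates = {g: passed[g] / total[g] for g in total}
impact_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print("adverse-impact ratio:", round(impact_ratio, 2))
# The 'four-fifths rule' treats a ratio below 0.8 as a warning sign that the
# screening step may be filtering out one group disproportionately.
```

Such a check says nothing about why the disparity arises, but it illustrates how the biases described above can be surfaced, and why firms choosing to automate recruitment also take on a duty to audit it.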
Two paths
At its starkest, we see two paths forward. Fuelled by scare tactics and the ‘great unknown’, consulting firms are pushing companies to jump on the AI bandwagon to avoid becoming economic ‘laggards’. Each consultancy is carving out its own niche, promoting distinct trajectories that range from cutting costs to eliminating low-skilled labour, while encouraging government AI policies to focus on economic gains.
Another path is, however, possible. AI applications could enable the reskilling of existing workforces, allowing workers to use their skills alongside new technologies. AI and associated technologies can be used to help transform education and health, and even to help attain peace.
There is nothing preordained about how AI will be deployed. The consequences of applying these technologies will reflect choices made at the organisational, political and societal levels. The future of AI is too important to be left to technology specialists. Social scientists, scholars of technology law and experts in the ethics of technology need actively to engage in shaping and structuring its development and adoption.
This article is based on a collection of articles on AI and work in the Cambridge Journal of Regions, Economy and Society, volume 13, issue 1, 2020.
(Judith Clifton is a professor in the Faculty of Economics and Business Science, University of Cantabria (Spain), and a visiting scholar at St Antony’s College, Oxford. Amy Glasmeier is professor of economic geography and regional planning at the Massachusetts Institute of Technology. Mia Gray is a senior lecturer and fellow of Girton College, Cambridge.)