Renew's Brogan Meaney asks: do advancements in AI look set to further worsen pre-existing economic inequalities?
We’re no strangers to the chilling cries of robots stealing our jobs. We’re in the midst of a technological revolution, and, like the industry-transforming revolutions that came before it, it has brought us to the brink of a dramatic transition within the workforce: automation.
The fast-paced advances within the field of AI have already brought automation to the workplace, with 1.5 million jobs in the UK currently at risk. The most commonplace example can be seen inside supermarkets across the country: automated self-checkout tills. As these have already demonstrated, advances in AI will have a dramatic impact on many industries. Although they can mean a more streamlined and efficient workforce, there are other consequences.
For example, the economists Anton Korinek and Joseph E. Stiglitz have argued that economic inequality is one of the main challenges we face in the advancement of workforce technology innovation.
We live in a world where the richest one percent own half of the world's wealth. This inequality is reflected in disparities in life expectancy, seen not only globally but also at home: in England, the gap in life expectancy between the wealthiest and the most deprived areas can reach 9.4 years.
The issue of job automation is a complex, multifaceted problem. We don’t know exactly how technology will advance, or how it will affect wealth inequalities within the UK. Based on current technologies, however, lower-skilled workers face a far greater risk of losing their jobs to automation than higher-skilled workers do. The three jobs most at risk are waiters and waitresses, shelf-fillers, and elementary sales occupations; the three least at risk are medical practitioners, higher education teaching professionals, and senior professionals of educational establishments.
Although AI will make some jobs obsolete, it will, of course, create new ones. These, however, will be jobs that require specialisation. When the role of supermarket cashier is fully automated, for example, the ex-cashier will be faced with having to learn new skills or adapt to an unstable reality in which they are very much replaceable.
Those most at risk of automation are those already economically marginalised within the workforce. The ONS reports that 70.2% of the roles at high risk of automation are currently held by women, and that the age group most affected is those aged between 20 and 24.
The risk of automation also varies by region, because it depends on the jobs available: areas with a greater volume of roles, in particular higher-skilled roles, are safer from the threat of automation. This compounds the risk for those who are already economically marginalised and disadvantaged within society.
Current AI research focuses on the importance of policy regulation to prevent exacerbating pre-existing inequalities. Democratising access to technology is crucial, as is creating equal opportunities within technological advancement. Some other suggestions have included a universal basic income (which Finland trialled last year), a ‘robot tax’, and a focus on lifelong learning and training, especially within computer science and STEM education. There is also discussion of privilege in access to AI’s advantages, with Korinek and Stiglitz commenting that it is conceivable that the wealthiest of humans will be able to finance, dictate, or sway certain advancements.
These are all attempts to offset the inequalities that this workplace revolution will cause. But will they be enough? The automation of certain job roles, or of particular aspects of roles, will soon be unavoidable. To prevent worsening economic inequalities, we need a government that takes these issues seriously.
Facial recognition technology is one of the many areas in which AI is changing our society. Here Gwen Jones explores the implications it has for our civil liberties.
In the UK, we have become accustomed to the freedoms associated with a limited state. We vote as we please, we are free to speak our minds, and we maintain our innocence until proven guilty.
We’re also quick to point the finger at anyone who doesn’t abide by our model of a ‘free and fair’ society; China’s Communist Party, for example, is reprimanded constantly (and rightly) for its brand of surveillance-based authoritarianism.
It’s often said that while it’s easy to point out the flaws in others, it’s more difficult to notice the same flaws in oneself. And so it is with the creeping use of surveillance tech in our Western, ‘liberal’ societies. Increasingly, facial recognition technology is taking on a prominent role in criminal justice, aiding the police in both trials and arrests.
1984-style mass-surveillance is generally thought of as something reserved for far-off dictatorships. In reality, there are aspects of Orwell’s dystopian fiction which bear more than a little likeness to our own systems.
Of course, the renunciation of civil liberties in the name of national security is not new. Our political institutions are designed to keep us safe, all while attempting to uphold as great a degree of personal freedom as possible. It’s a finely-tuned balancing act and it’s not uncommon for the needle to swing out of line in one direction or the other. The expansion of police powers under Blair’s post-9/11 government, for example, was widely regarded as a lurch towards authoritarianism. But for a country that rejected ID cards and a national DNA database, facial recognition technology seems like a step firmly in the wrong direction.
A number of pilot schemes for facial recognition tech are currently underway in London, and problems are already starting to emerge.
During one trial in January, a man walking past a facial recognition camera covered his face. Despite the Met having released a statement saying that “anyone who declines to be scanned will not necessarily be viewed as suspicious”, the man was stopped, forced to uncover his face, and photographed anyway. On getting angry – and who can blame him? – he was given a £90 fine for anti-social behaviour. Other witness reports suggest he was not the only one.
In a world where refusing to comply with facial recognition – a technology which effectively turns people into walking ID cards – can earn you a fine, ‘Orwellian’ really isn’t an exaggeration. And that’s before even mentioning the technology’s catastrophic inaccuracy thus far: algorithmic bias means people with darker skin are less likely to be identified correctly, a problem stemming in large part from the fact that most of the software’s developers are white.
According to the BBC, at least three opportunities to test how the technology deals with non-white faces have been missed over the last five years. The Home Office’s response to these criticisms has so far been to reiterate that “the technology continues to evolve” and that its effectiveness is “under constant review.”
It’s easy to imagine a world in which the technology has been perfected, improving search accuracy and reducing the margin of error to almost nil. But is this really a comforting thought? In some ways at least, a mass surveillance network that works infallibly is almost more terrifying than one that does not.
Generally speaking, the benefits of any technology that interferes with civil liberties in such a way must be at least proportional to the cost incurred to these liberties. How on earth this will be calculated remains to be seen; one can imagine the ease with which this ‘proportionality’ could be successively requalified. It might be, of course, that the government of the day maintains its commitment to the proper and restricted use of this technology. But who’s to say that future governments will do the same?
It’s no secret that power isn’t easily given up once awarded. This raises the question: once the infrastructure has been built and the technology created, will we still be able to change our minds further down the line?
We should think very carefully about whether or not this is the kind of future we want to live in, before embarking any further down what looks to be a one-way street.
The robots aren't coming - they're already here, says Renew's James Bryan.
“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come - namely, technological unemployment” - John Maynard Keynes
Between the decline of manual labour in industries ranging from automotive manufacturing to agriculture, the vast media coverage of the rise of automation technology, and the Bank of England’s own online tool estimating the probability of losing one’s job to automation, it seems Keynes had a point.
Much of the modern drive towards automation is based on advances in the field of artificial intelligence and the creation of more powerful microprocessors. Jobs once considered the exclusive domain of humanity are now regularly performed by machines. The benefits of automation technology apply to virtually any field one would care to name; it is no understatement to say that data science and better technology save lives, and nowhere is this truer than in medicine. This, however, raises an important question that seems almost philosophical in nature: do jobs exist to provide employment, or to produce the output of that work?
As the field of automatable work expands, this question becomes ever more urgent and important. It is clear that the products of automation have led to greater prosperity and efficiency on a global scale, but research and development of these technologies is an area which requires more investment and greater attention. While the question of technological unemployment does not have a clear answer yet, it is clear that the creation of new policies to deal with the fallout of job loss on a perhaps unprecedented scale is a vital part of the equation. If this is to be done with minimal negative consequences, those with the technical expertise to understand these issues in their true depth will need to be heavily involved in the process.
If there is a lesson to be learned from how evolving technologies have shaped our political and social landscape, it is that those currently in power have failed time and again to address the implications of the misapplication of data science and artificial intelligence by actors seeking to manipulate public perception and promote their own agenda. Deepfakes, extremely realistic faked footage created using machine learning techniques, aren’t coming; they’re already here. Cambridge Analytica existed and we may never know the true scale of how effective their large-scale social engineering campaigns were.
The reality is, the robots aren’t coming. They’ve come, and these are issues which aren’t going away.