Renew's Brogan Meaney asks: do advancements in AI look set to further worsen pre-existing economic inequalities?
We’re no strangers to the chilling cries of robots stealing our jobs. We’re in the midst of a technological revolution, and, as with the industry-transforming revolutions that came before it, we’re on the brink of a dramatic transition within the workforce: automation.
The fast-paced technological advances within the field of AI have already brought automation to the workplace, with 1.5 million jobs in the UK currently at risk. The most commonplace example can be seen inside supermarkets across the country: automated self-checkout tills. As these have already demonstrated, advances in AI will have a dramatic impact on many industries. Although they can mean a more streamlined and efficient workforce, there are other consequences.
For example, the economists Anton Korinek and Joseph E. Stiglitz have argued that economic inequality is one of the main challenges we face in the advancement of workforce technology innovation.
We live in a world where the richest one percent own half of the world's wealth. This results in disparities in life expectancies, seen not only globally, but also at home: in England, the gap in life expectancy between the wealthiest and the most deprived areas can reach up to 9.4 years.
The issue of job automation is a complex, multifaceted problem. We don’t know exactly how technology will advance, or how it will affect wealth inequalities within the UK. However, based on current technologies, lower-skilled workers face a far greater risk of losing their jobs to automation than higher-skilled workers do. The three jobs most at risk are waiters and waitresses, shelf-fillers, and elementary sales occupations; the three least at risk are medical practitioners, higher education teaching professionals and senior professionals of educational establishments.
Although AI will make some jobs obsolete, it will, of course, create new ones. However, these new jobs will require specialisation. For example, once supermarket checkouts are fully automated, ex-cashiers will be faced with having to learn new skills or adapt to an unstable reality in which they are very much replaceable.
Those most at risk of automation are the ones already economically marginalised within the workforce. The ONS reports that 70.2% of the roles at high risk of automation are currently held by women, and, in addition, the age group most affected by automation is those aged between 20 and 24.
The risk of automation also varies by region, reflecting the jobs available: areas with a greater volume of roles, in particular higher-skilled roles, are safer from the threat of automation. This increases the risks for those who are already economically marginalised within society.
Current AI research focuses on the importance of policy regulation to prevent exacerbating pre-existing inequalities. Democratising access to technology is crucial, as is creating equal opportunities within technological advancement. Other suggestions have included a universal basic income (which Finland trialled last year), a ‘robot tax’, and a focus on lifelong learning and training, especially in computer science and STEM education. There is also the discussion of privilege within AI advances, with Korinek and Stiglitz commenting that it is conceivable that the wealthiest of humans will be able to finance, dictate, or sway certain advancements.
These are all attempts to offset the inequalities that this workplace revolution will cause. But will they be enough? The automation of certain job roles, or particular aspects of roles, will soon be unavoidable. To prevent worsening economic inequalities, we need a government that takes these issues seriously.
Capitalism is “very much part of the solution” to the climate crisis, Bank of England governor Mark Carney said in an interview yesterday. Perhaps he's right, says Gwen Jones in this Renew Long Read.
For a long time, those leading the charge against climate change have branded capitalism – responsible for the oil economy and the prioritisation of instant growth over sustainability – as the planet’s greatest adversary. The Green Party and their contemporaries Extinction Rebellion have railed against free markets as working in opposition to their cause.
And many experts agree. The line? Capitalism and environmentalism are mutually exclusive, and the effective mitigation of climate change will necessitate the end of capitalism in favour of a more sustainable economic system.
In response to Carney’s Channel 4 appearance, an Extinction Rebellion spokesperson told the Guardian, “We are destroying our planet, and business as usual is not going to save us. We must question any system that has led us to this path of mass extinction and look to more sustainable economic models that are not based on resource depletion and increasing emissions.”
But Carney is confident in his convictions. According to the economist, who has previously worked for Goldman Sachs, the opportunities associated with tackling climate change are growing rapidly – and so are the costs of failing to do so. In a system predicated on the exploitation of opportunity and an aversion to risk, capital will move naturally in the direction of sustainability. In his strident defence of capitalism as a solution to the climate crisis, Carney argues that companies that continue to ignore the issue “will go bankrupt without question.”
Is he right? Like many things in life, the answer is not cut and dried. Capitalism won’t solve the planet’s problems, at least not if it’s acting alone.
Being a climate capitalist
Taking this leap from traditional to sustainable business practices requires sizeable investment, and, for the time being at least, the majority of green energy sources are still more expensive than their conventional counterparts. This hurts a company’s bottom line and means that prices may have to rise in order to maintain profits.
In a competitive market, this has some important implications: businesses are forced to choose between refining their practices at their own expense and sticking to their traditional processes (even if this means running the risk of worsening climate change). In an ideal world, all polluting corporations would decide to cut their emissions simultaneously in the name of the climate. This comes at a cost to each business, but gives no business a comparative advantage over any other, meaning all maintain their share of the market.
However, no business can be sure of what the others will do. It’s a dog-eat-dog world after all, and they have no reason to trust each other. If Business A decides to cut its emissions but Business B does not, Business B can take advantage of lower operating costs and price A out of the market. The same is also true the other way around.
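The incentive structure described above is the classic prisoner's dilemma. A minimal sketch in Python, with illustrative payoff numbers (the specific values here are assumptions; only their ordering matters):

```python
# Payoffs to (Business A, Business B) for each pair of choices.
# "cut" = invest in cutting emissions; "pollute" = stick with current practice.
payoffs = {
    ("cut", "cut"):         (3, 3),  # both pay the cost, neither loses market share
    ("cut", "pollute"):     (0, 5),  # A bears the cost, B undercuts A on price
    ("pollute", "cut"):     (5, 0),  # the mirror case
    ("pollute", "pollute"): (1, 1),  # no one invests; climate costs mount for both
}

def best_response(options, their_choice, player):
    """Return the choice that maximises this player's payoff,
    taking the other firm's choice as fixed."""
    def my_payoff(choice):
        key = (choice, their_choice) if player == 0 else (their_choice, choice)
        return payoffs[key][player]
    return max(options, key=my_payoff)

options = ["cut", "pollute"]
# Whatever B does, A's best response is to keep polluting (and vice versa),
# even though (cut, cut) leaves both firms better off than (pollute, pollute).
for b_choice in options:
    print(f"If B chooses {b_choice!r}, A's best response is "
          f"{best_response(options, b_choice, player=0)!r}")
```

Both loop iterations print `'pollute'`: each firm's dominant strategy is to defect, which is precisely why, absent outside intervention, no business is willing to blink first.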
Carney is right – the economic costs of ignoring climate change are rising, and the costs of mitigating it, relative to the opportunities it presents, are shrinking, fast. But money talks, and until we reach a point where the costs outweigh the benefits in the short to medium term, no business will be willing to blink first.
The solution most likely lies with a radical rethink of the role the state plays in creating markets and driving innovation. In the liberal economic tradition, the state is portrayed as a clumsy, bureaucratic obstruction to the actions of the dynamic free market. This is as damaging as it is misguided. The state, with its plentiful resources and capacity to take risks (that private actors often cannot and will not take), is able to defy common barriers to innovation.
Historically, governments have played a critical role in funding some of the most influential developments in tech to date. The internet, GPS, voice recognition, biotech and countless pharmaceutical breakthroughs have come out of US government agencies such as DARPA and the NIH. The green revolution is next: ARPA-E, the US government body responsible for energy innovation, is already having an impact.
Market forces are notoriously unreliable when it comes to advancing the good of society. Markets don’t have morals, but states are unique in their ability to create new markets and shape existing ones towards a socially productive end. Financial viability is key to private action – the state can incentivise innovation in desirable areas through grants and subsidies, the likes of which benefitted Apple in the early stages of its development. Governments should also be prepared to take a lead in certain areas, investing in high risk, high return strategies to secure this new role within the economy.
This is not to undermine the value of private actors in driving innovation and wealth creation. But markets are not infallible and failure is commonplace. Up until now, the advancing climate crisis driven by the quest for growth has been an excruciating example of this. The right conditions must be set before the private sphere can drive us forward in a direction we actually want to be travelling in.
Facial recognition technology is one of the many areas in which AI is changing our society. Here Gwen Jones explores the implications it has for our civil liberties.
In the UK, we have become accustomed to the freedoms associated with a limited state. We vote as we please, we are free to speak our minds, and we maintain our innocence until proven guilty.
We’re also quick to point the finger at anyone who doesn’t abide by our model of a ‘free and fair’ society; China’s Communist Party, for example, is reprimanded constantly (and rightly) for its brand of surveillance-based authoritarianism.
It’s often said that while it’s easy to point out the flaws in others, it’s more difficult to notice the same flaws in oneself. And so it is for the surging use of surveillance tech in our Western, ‘liberal’ societies. Increasingly, facial recognition technology is taking on a prominent role in criminal justice, aiding the police force in both trials and arrests.
1984-style mass-surveillance is generally thought of as something reserved for far-off dictatorships. In reality, there are aspects of Orwell’s dystopian fiction which bear more than a little likeness to our own systems.
Of course, the renunciation of civil liberties in the name of national security is not new. Our political institutions are designed to keep us safe, all while attempting to uphold as great a degree of personal freedom as possible. It’s a finely-tuned balancing act and it’s not uncommon for the needle to swing out of line in one direction or the other. The expansion of police powers under Blair’s post-9/11 government, for example, was widely regarded as a lurch towards authoritarianism. But for a country that rejected ID cards and a national DNA database, facial recognition technology seems like a step firmly in the wrong direction.
A number of pilot schemes for facial recognition tech are currently underway in London, and problems are already starting to emerge.
During one trial in January, a man walking past a facial recognition camera covered his face. Despite the Met having released a statement saying that “anyone who declines to be scanned will not necessarily be viewed as suspicious”, the man was stopped, forced to uncover his face and photographed anyway. When he got angry – and who can blame him? – he was given a £90 fine for anti-social behaviour. Other witness reports suggest he was not the only one.
In a world where refusing to comply with facial recognition requirements – which effectively turn people into walking ID cards – can be punished on the spot, ‘Orwellian’ really isn’t an exaggeration. And this isn’t even to mention the technology’s catastrophic inaccuracy thus far; algorithmic bias, for instance, means people with darker skin are less likely to be identified correctly, stemming largely from the fact that most of the software’s developers are white.
According to the BBC, at least three opportunities to test how the technology deals with non-white faces have been missed over the last five years. The Home Office’s response to these criticisms has so far been to reiterate that “the technology continues to evolve” and that its effectiveness is “under constant review.”
It’s easy to imagine a world in which the technology has been perfected, improving search accuracy and reducing the margin of error to almost nil. But is this really a comforting thought? In some ways at least, a mass surveillance network that works infallibly is almost more terrifying than one that does not.
Generally speaking, the benefits of any technology that interferes with civil liberties in such a way must be at least proportional to the cost incurred to these liberties. How on earth this will be calculated remains to be seen; one can imagine the ease with which this ‘proportionality’ could be successively requalified. It might be, of course, that the government of the day maintains its commitment to the proper and restricted use of this technology. But who’s to say that future governments will do the same?
It’s no secret that power isn’t easily given up once awarded. This raises the question: once the infrastructure has been built and the technology created, will we still be able to change our minds further down the line?
We should think very carefully about whether or not this is the kind of future we want to live in, before embarking any further down what looks to be a one-way street.
The robots aren't coming - they're already here, says Renew's James Bryan.
“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come - namely, technological unemployment” - John Maynard Keynes
Given the decline of manual labour in industries ranging from automotive manufacturing to agriculture, the vast media coverage of the rise of automation technology, and tools such as the one on the Bank of England’s site that estimates the probability of losing one’s job to automation, it seems Keynes had a point.
Much of the modern drive towards automation is based on advances in the field of artificial intelligence and the creation of more powerful microprocessors. Jobs which were once considered to be the exclusive domain of humanity are now regularly performed by machines. The benefits of automation technology are applicable to virtually any field one would care to name; it is no understatement to say that data science and better technology save lives, and nowhere is this truer than in medicine. However, this also raises an important question that seems almost philosophical in nature: do jobs exist to employ people, or to produce the output of that work?
As the field of automatable work expands, this question becomes ever more urgent and important. It is clear that the products of automation have led to greater prosperity and efficiency on a global scale, but research and development of these technologies is an area which requires more investment and greater attention. While the question of technological unemployment does not have a clear answer yet, it is clear that the creation of new policies to deal with the fallout of job loss on a perhaps unprecedented scale is a vital part of the equation. If this is to be done with minimal negative consequences, those with the technical expertise to understand these issues in their true depth will need to be heavily involved in the process.
If there is a lesson to be learned from how evolving technologies have shaped our political and social landscape, it is that those currently in power have failed time and again to address the implications of the misapplication of data science and artificial intelligence by actors seeking to manipulate public perception and promote their own agenda. Deepfakes, extremely realistic faked footage created using machine learning techniques, aren’t coming; they’re already here. Cambridge Analytica existed and we may never know the true scale of how effective their large-scale social engineering campaigns were.
The reality is, the robots aren’t coming. They’ve come, and these are issues which aren’t going away.