Facial recognition technology is one of the many areas in which AI is changing our society. Here Gwen Jones explores the implications it has for our civil liberties.
In the UK, we have become accustomed to the freedoms associated with a limited state. We vote as we please, we are free to speak our minds, and we maintain our innocence until proven guilty.
We’re also quick to point the finger at anyone who doesn’t abide by our model of a ‘free and fair’ society; China’s Communist Party, for example, is reprimanded constantly (and rightly) for its brand of surveillance-based authoritarianism.
It’s often said that while it’s easy to point out the flaws in others, it’s more difficult to notice the same flaws in oneself. And so it is for the surging use of surveillance tech in our Western, ‘liberal’ societies. Increasingly, facial recognition technology is taking on a prominent role in criminal justice, aiding the police in both trials and arrests.
1984-style mass-surveillance is generally thought of as something reserved for far-off dictatorships. In reality, there are aspects of Orwell’s dystopian fiction which bear more than a little likeness to our own systems.
Of course, the renunciation of civil liberties in the name of national security is not new. Our political institutions are designed to keep us safe, all while attempting to uphold as great a degree of personal freedom as possible. It’s a finely-tuned balancing act and it’s not uncommon for the needle to swing out of line in one direction or the other. The expansion of police powers under Blair’s post-9/11 government, for example, was widely regarded as a lurch towards authoritarianism. But for a country that rejected ID cards and a national DNA database, facial recognition technology seems like a step firmly in the wrong direction.
A number of pilot schemes for facial recognition tech are currently underway in London, and problems are already starting to emerge.
During one trial in January, a man walked past a facial recognition camera and covered his face. Despite the Met having released a statement saying that “anyone who declines to be scanned will not necessarily be viewed as suspicious”, the man was stopped, forced to uncover his face and photographed anyway. When he got angry – and who can blame him? – he was given a £90 fine for anti-social behaviour. Other witness reports suggest he was not the only one.
In a world where refusing to comply with facial recognition requirements – requirements which effectively turn people into walking ID cards – is itself punishable, ‘Orwellian’ really isn’t an exaggeration. And that’s before we come to the technology’s record of catastrophic inaccuracy: algorithmic bias means people with darker skin are less likely to be identified correctly, a problem stemming predominantly from the fact that most of the software’s developers are white.
According to the BBC, at least three opportunities to test how the technology deals with non-white faces have been missed over the last five years. The Home Office’s response to these criticisms has so far been to reiterate that “the technology continues to evolve” and that its effectiveness is “under constant review.”
It’s easy to imagine a world in which the technology has been perfected, improving search accuracy and reducing the margin of error to almost nil. But is this really a comforting thought? In some ways at least, a mass surveillance network that works infallibly is almost more terrifying than one that does not.
Generally speaking, the benefits of any technology that interferes with civil liberties in such a way must be at least proportional to the cost incurred to these liberties. How on earth this will be calculated remains to be seen; one can imagine the ease with which this ‘proportionality’ could be successively requalified. It might be, of course, that the government of the day maintains its commitment to the proper and restricted use of this technology. But who’s to say that future governments will do the same?
It’s no secret that power, once awarded, isn’t easily given up. This raises the question: once the infrastructure has been built and the technology created, will we still be able to change our minds further down the line?
We should think very carefully about whether or not this is the kind of future we want to live in, before embarking any further down what looks to be a one-way street.
The robots aren't coming - they're already here, says Renew's James Bryan.
“We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come - namely, technological unemployment” - John Maynard Keynes
Between the decline of manual labour in industries ranging from automotive manufacturing to agriculture, the vast media coverage of the rise of automation technology, and the Bank of England’s own website, which estimates the probability of one’s job being lost to automation, it seems Keynes had a point.
Much of the modern drive towards automation is based on advances in the field of artificial intelligence and the creation of more powerful microprocessors. Jobs which were once considered the exclusive domain of humanity are now regularly performed by machines. The benefits of automation technology apply to virtually any field one would care to name; it is no exaggeration to say that data science and better technology save lives, and nowhere is this truer than in medicine. However, this also raises an important question that seems almost philosophical in nature: do jobs exist to provide employment, or to produce the output of that work?
As the field of automatable work expands, this question becomes ever more urgent and important. It is clear that the products of automation have led to greater prosperity and efficiency on a global scale, but research and development of these technologies is an area which requires more investment and greater attention. While the question of technological unemployment does not yet have a clear answer, it is clear that the creation of new policies to deal with the fallout of job loss on a perhaps unprecedented scale is a vital part of the equation. If this is to be done with minimal negative consequences, those with the technical expertise to understand these issues in their true depth will need to be heavily involved in the process.
If there is a lesson to be learned from how evolving technologies have shaped our political and social landscape, it is that those currently in power have failed time and again to address the implications of the misapplication of data science and artificial intelligence by actors seeking to manipulate public perception and promote their own agenda. Deepfakes, extremely realistic faked footage created using machine learning techniques, aren’t coming; they’re already here. Cambridge Analytica existed and we may never know the true scale of how effective their large-scale social engineering campaigns were.
The reality is, the robots aren’t coming. They’ve come, and these are issues which aren’t going away.
We are currently caught up in one of the largest and most momentous revolutions in human history – whether we know it yet or not. We’re living through perhaps the most fundamental transformation of our environment mankind has ever seen.
The war is not being fought with rifles, bayonets or nuclear force – this time around, the weapons of choice are big data and smart technology. Quieter maybe, but more insidious than its predecessors, the information revolution is changing the way we shop, vote, govern and even think.
We’re quickly waking up to the fact that pivotal changes are underway. But, as tends to be the case with such periods of upheaval, it’s almost impossible to say where they’re headed until they get there. With the conclusion of the digital revolution still a very long way off, we won’t be granted the luxury of hindsight as a means of understanding this change. It’s not for want of trying either – academia across disciplines is riddled with attempts to explain our new and interconnected world.
In the face of such uncertainty, we have a tendency to revert to what we know - ideas that have helped to explain the past but are no longer helpful in trying to understand the future. We see this all the time in our politics, but it often leaves us staunchly on the back foot and ill-prepared for challenges to come.
In her new book, The Age of Surveillance Capitalism, Shoshana Zuboff puts forward a welcome new attempt to describe the effects of digitisation. The focus is not so much on the workings of the Facebook/Google/Amazon clan themselves as on the ways in which they are shaping the wider context of global capitalism as we know it. Zuboff describes the new evolution of capitalism that has emerged from big tech as ‘Surveillance Capitalism’ – a system that both relies upon and utilises big data to achieve its ends.
So-called surveillance capitalists – online service providers in their myriad forms – are able to monitor the behaviour of their user bases with a remarkable degree of detail and accuracy. While many of us feel comfortably veiled in algorithmic obscurity, in reality, tech giants are covertly collecting hundreds of thousands of bytes of data each day; data which can be fed back into improving algorithms and making predictions on the behaviour of their users. Much of this happens without explicit or obvious consent.
At best, these processes contribute to service improvement, creating more user-friendly interfaces and intuitive design. At worst, the acquisition of behavioural data is used to develop highly sophisticated machine intelligence capable of predicting what you will do now, soon and later. As these techniques improve, usership grows – a feedback loop which, without regulation, could continue indefinitely. Prediction techniques and their ability to influence human behaviour are already having huge implications for the political and economic landscape, creating and shaping new markets and voting behaviours at the whim of the corporations that control them.
Digital hegemony is already well-established and will become yet more deeply entrenched as data is used to facilitate its own growth. It’s becoming increasingly important to re-examine the way we look at the wider system as the power dynamics within it begin to shift.
Traditionally, economic theory has relied on the assumption that market forces are dynamic, unpredictable and ultimately unknowable. The State should refrain from attempting to regulate or constrain markets on this basis, just as agents in a market-place are free to compete with each other in mutual ignorance. But with the rise of big tech, these fundamental principles have changed. It is essential that our assumptions about markets change with them.
Global tech firms now know too much to be granted the same licence as other free market actors - after all, under their influence, markets are no longer truly free. There’s no easy fix, either - the acquisition of user data is so deeply inherent in the operations of online service providers that self-regulation would be almost impossible.
Rather, it may be time to rethink our unquestioning faith in free-market economics - if for no other reason than that markets are demonstrably becoming less and less free. The governing principles of the 20th century are becoming less relevant as time progresses, and less able to cope with this rapid, systemic change.
The absence of state regulation risks the rise of insurmountable monopolies that wield too great an influence over our markets, our behaviour and our democracy. Legislation against this will no doubt be hugely challenging, but the consequences of shying away from the problem will be more challenging still.