By Patrick Goss
I’ve got a confession to make: whenever I see a security camera I feel inexplicably guilty. So it was with some concern that I read reports that the US government expects to be installing CCTV that will be able to automatically flag up those acting suspiciously.
Apparently, facial analysis software is becoming so accurate that the first cameras able to ‘recognise’ people acting guiltily or suspiciously, and flag them up, are on the way.
I already object to CCTV on an intellectual level. As I have written in the past, I feel that we are moving inexorably towards the kind of dystopian world described by George Orwell in 1984. And the United Kingdom in particular has been quick to embrace the notion that cameras deter crime.
Taking that up a notch to having machines determine if someone is acting in a suspicious way is, as far as I’m concerned, moving into genuinely scary territory.
My first concern is what exactly the repercussions of this are; when you are flagged as ‘acting suspiciously’ I would imagine that, at first at least, you are merely brought to the attention of an operator who can monitor what you are doing.
But at what point is it going to become okay to stop and search those people that a machine has flagged up? How do you differentiate between, say, someone who is conducting an illicit affair (and presumably acting shiftily) and a terrorist?
Sociology students learn early on that people being observed modify their behaviour, and in a society with increasing numbers of armed police and raised security levels, many people are not going about their daily business in the carefree, innocent fashion that they otherwise would.
It all comes down to the most pressing question of the 21st century so far. Is it okay to impinge on the civil liberties of the many to try to prevent the rise of terrorism?
Of course, the accuracy of the software is another worry. Would it really be that tough to train people to act nonchalantly enough to cheat the computers? I think it’s fair to say that becoming reliant on machines to ‘read’ people’s motivations opens up the potential for scaling back police presence, and that in itself is a dangerous route to go down.
If the software does prove even a minor success then you are left with the problem that a machine picking out the guilty creates a massively dangerous precedent in terms of the potential to abuse the system. For a start, using the machine as justification, any person could potentially be flagged up for ‘acting suspiciously’ and find their privacy in question.
Databases of suspicious behaviour could be set up, and does someone who has ‘acted suspiciously’ in the past then find their future movements tracked as well — even if the machine was in error in the first case? Will we be informed if we have been tracked by these cameras, or will we remain oblivious to the fact that our life is being recorded because a camera decided some facial tic was worthy of note?
I actually have a hard time accepting that the lesser evil of facial recognition software is truly necessary — although the inevitability of its widespread introduction is becoming more and more evident.
Again the reliance on a system of comparing people to a ‘watch-list’ of suspects does not allow for those who have managed to fly under the radar — not to mention the problematic situation of monitoring those that have never been convicted, or in many cases even accused, of a crime.
Comparing people against a criminal database is one thing, but it is a very short hop to tracking everybody all of the time and building up an increasingly detailed database about each and every one of us.
Which brings us back to the old chestnut of ‘Why do I need to worry about this kind of thing if I haven’t done anything wrong?’ This question is at the heart of the entire privacy debate and remains a vital discussion.
For me, the prospect of detailed government databases of our details and the minutiae of our lives is a concern because those records remain regardless of who is in power.
Does my religion make much of a difference at the current time? No. But what if a government arrived in the future that DID consider religious views outside of their own beliefs to be a crime? By agreeing to lose privacy in the name of terrorism prevention, you are signing a chit of trust not just for this government, but every government going forward.
And that, for me, is a level of trust I just don’t have.