Clearview, the facial recognition company: is it another tool of tyranny for governments?

Clearview AI is a start-up founded in 2017. It provides facial recognition software to law enforcement authorities in the US and Canada. The software, used by over 600 authorities, identifies suspects by comparing photos against its online database of over 3 billion images scraped from the internet. This dwarfs the FBI’s database of 641 million images.

According to their website, Clearview is “not a surveillance system” and does not and cannot search any “private or protected info”.

Clearview claims that all its images have been acquired from the “open web”. However, there are concerns that the technology is an invasion of privacy, and that the more it is developed, the more dangerous it could become. The company has been accused of infringing civil liberties and is involved in a number of ongoing lawsuits.

Many legal authorities in the US are sceptical of Clearview and whilst there is no federal law regulating it, many oppose its use. Recently, the attorney general of New Jersey barred the use of the “chilling and unregulated” software in the state. The city of San Francisco has also banned its law enforcement from using it.

Politicians have been vocally critical of the software. Senator Ron Wyden tweeted that it reminded him of an episode of Black Mirror. Moreover, Twitter sent the company a cease and desist letter ordering it to stop using photos from the site.

Clearview has also been accused of exaggerating and fabricating the technology’s achievements. The company claims the software helped the NYPD identify a suspect plotting terrorist attacks in New York. However, the NYPD issued a statement saying it did not use Clearview technology and has no institutional relationship with the company, adding that only “sketchy cops” would use it.

So, is Clearview an invaluable resource that could revolutionise the way law enforcement agencies identify suspects? Or is it a dangerous infringement on privacy that imposes potentially dangerous surveillance on populations? This will be the point of debate for the editors this week.

Written by POI Correspondent, Emer Kelly

Causing the problems it claims to solve – Conservative Article

Over the last 10 years, technology and access to information have grown faster than we could have imagined. We are now living in a post-privacy era. Masses of personal information have been voluntarily posted on the internet.

From this, companies such as Clearview have been able to utilise online data to create facial recognition software. Clearview’s stated mission is to act as a law enforcement tool, helping to track down criminals quickly and easily.

However, I am cautious of a non-governmental, profit-driven company that claims to be using personal information for surveillance.

On its website, Clearview claims to offer a solution to many issues faced by police departments across the world. It showcases how facial recognition technology cuts down the hours of work police have to put into identifying suspects, and makes a vague claim that its technology has already helped track down “hundreds of at-large criminals, including pedophiles, terrorists and sex traffickers”.

Clearview has taken credit for the use of its system in an NYPD terrorist case. The suspect’s photo was allegedly “searched in Clearview”. Its software linked the image to an online profile with the man’s name in less than five seconds.

However, an NYPD spokesperson has strongly denied these claims. In fact, the NYPD has stated that “there is no institutional relationship” with Clearview.

Of course, it is common for startup companies to make exaggerated claims in order to boost their business. However, the stakes are significantly higher for a company building tools marketed directly at law enforcement agencies, with the aim of identifying criminal suspects.

Technological innovations have made essential advances in law enforcement. However, because Clearview is a privately run, profit-driven company, it will inevitably continue to develop and sell products to any market willing to purchase. This won’t just be law enforcement agencies – Clearview investors have already stated that the software will become available to the public in the near future.

Making this level of facial recognition software available to the public opens up a whole new level of privacy invasion. It would end our ability to remain private in public and would facilitate dangerous behaviour towards certain individuals.

For any stalker, rapist, murderer, child molester or terrorist, Clearview AI could make finding the personal details, name and address of any targeted individual/s quick and easy.

Even within Law Enforcement or Government agencies, mass surveillance and facial recognition could be abused. There have been many examples of police using databases to help their friends or themselves stalk women, threaten motorists after traffic altercations, and track estranged spouses.

So imagine the kind of corruption that could occur with this new level of technology. The weaponisation possibilities are endless. There are currently no working laws managing legal rights when it comes to facial recognition, and I believe a database as large as Clearview will put us all at risk.

Clearview states that it will help to stop terrorism. Over the past decade, terrorists have killed on average 21,000 people per year. Clearview technology is unlikely to reduce this, as suicide attackers are clearly not deterred by increased surveillance or facial recognition, and might even be attracted by the idea of wider coverage and identification. If someone wants to carry out a terrorist attack, they will find a way of doing so. So why give up our privacy if our protection can’t even be guaranteed?

Legislating for Clearview could be compared to gun legislation in America – claiming it is there to protect, but in reality creating danger and causing as many problems as it solves. However, Clearview is already being used by multiple US agencies, and I believe it is only a matter of time before it is hacked, compromising public safety.

Until Clearview’s many moral and technical problems are solved, this technology should be banned from the market.

Written by Conservative Editor, Eleanor Roberts

Point of Information

It’s been a while, but I agree! – a Liberal response

The arguments outlined above for banning Clearview-esque technologies are the only defensible ones. Ms Roberts is completely right. We cannot allow this type of potentially detrimental technology until we have a clear (I can’t avoid the word!) idea of its implications and how we would legislate against them.

I am really glad to see Ms Roberts, as the Conservative editor, taking a strong stance. It would be far too easy to say that this is a private company and we should be living in a free market where technology can prosper. So I commend her for taking the hard line here. If only we could see governments across the world sitting up and taking more serious action.

I fear, if this is looked into further, we will see much deeper corruption and collusion between companies like Clearview and law enforcement and other public institutions. We can only hope the media and public pressure will ensure that any devious connections are scrutinised and shut down.

Written by Liberal Editor, Olivia Margaroli

We love to see it – a Labour response

When I found out about the topic in question for this week’s article I was interested to see what take my conservative peer would present. I imagined a Friedmanist call to follow the free market, wherever it might take us. Or maybe a more Bush-era proclamation that more security is always better, no matter the societal costs.

So I find it slightly more boring but far more encouraging to see the position that Ms Roberts has laid out today. Limits must be placed on the free market in order to protect the people from encroachment on their rights.

However, I would like to see her expand on her last point, as I found the comparison between Clearview and gun legislation in the US confusing. I do not see how legislating against Clearview would cause problems in the way gun legislation does. Whilst Americans’ clear constitutional right to bear arms makes gun legislation difficult, AI legislation faces no such constitutional blocks; in fact, the right to privacy is alluded to in the Fourth Amendment, which protects people’s security in “persons, houses, papers, and effects”.

But all that is quibbling over what is a strong article with the right mindset.

Written by Labour Editor, Daniel Orchard

The answer is clear – Liberal Article

Facebook privacy settings are purposefully confusing and misleading. ‘Do you want search engines outside of Facebook linking to your profile?’. To be perfectly honest, my response was ‘um, not really sure what that means’. Unfortunately, that’s not an option!

If you search my name on Google, it’s likely that my photos and Facebook profile will show up. What’s the problem with that?

The problem is that companies like Clearview use the photos that come up in these searches to build their database. According to the founder, if you answer ‘no’ to the question above, your Facebook photos won’t be included in the database.

Except, if your profile and its data have already been scraped by Clearview, it’s too late. It makes no difference whether you take the photos down. Nor does it matter that this scraping violates the T&Cs of many social media sites. In an interview, Clearview founder Hoan Ton-That responded to this concern with “A lot of people are doing it.”

Peter Thiel, a key investor in Clearview, sits on the board of Facebook. He is also the co-founder of PayPal. Venmo, the payment app owned by PayPal, makes payments public by default.

Clearview has mined data from Venmo. Venmo has responded by saying, “Scraping Venmo is a violation of our terms of service and we actively work to limit and block activity that violates these policies”. You can see the irony. Disapproving public statements like these aim to sweep problems under the rug; they don’t show the company is acting on them.

The information I have given above is the tip of the iceberg. I found it in just a few hours; I’m sure there are scarier connections to Clearview out there.

It is unclear who is using Clearview and how. In America, a number of police forces seem to be using it, whereas in the UK the police reportedly use software from NEC, another facial recognition AI company.

The UK police have also announced that they will use cameras on streets to spot criminals, a level of surveillance not widely used outside of China.

The wider problem of facial recognition technology, not just Clearview, must be addressed. There is no monopoly on AI; it’s just maths. Many companies will be doing this, and developing even more sophisticated technology than Clearview’s.

Regulation cannot keep up with the rapid development of technology.

Our right to a private life is lost if the police use these technologies. Most of us could live with this if we thought it would stop things like acts of terror. However, in the hands of criminal organisations, we will be in danger.

You walk down the street and, unknown to you, a photo is taken. Instantly your address is found, then your workplace, where your children go to school, what you buy. The list goes on.

All this information would potentially be available from just one photo. Beyond the problems of individual autonomy, this would be downright dangerous to your life and well-being. As it stands, the potentially crippling abuse of this technology outweighs the benefits that could be had.

Facial recognition technology and databases like Clearview must be banned. Regulation, at its current level, is not able to sufficiently protect people from its capabilities. If one day regulation is sophisticated enough, the ban can be lifted.

Written by Liberal Editor, Olivia Margaroli

Point of Information

What private life? – a Conservative response

This article does address many important issues surrounding Clearview, and I agree with the advocacy of a complete ban.

However, I would argue that privacy is not far off being dead already. This isn’t just a result of social media. We all subscribe to newsletters, shop online, take quizzes and enter contests, all of which makes our personal information public and traceable. Even by loading a website you leave a trail of personal information behind you.

Ms Margaroli also states that the level of surveillance being adopted in the UK is one that is very uncommon outside of China. This is untrue. According to a report from the Carnegie Endowment for International Peace at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes, and this number is rapidly increasing.

Surveillance is already playing a large role in law enforcement globally, and our privacy is already at risk. Nevertheless, I completely agree with the article’s main argument, though I think it needs to consider the post-privacy context in which we already live.

Written by Conservative Editor, Eleanor Roberts

When could a ban be dropped? – a Labour response

I am still waiting for myself and Ms Margaroli to fundamentally disagree on a point, which makes me worry. Am I a secret liberal? What I find more likely is that Ms Margaroli is actually a leftie who is not quite ready to admit it. Maybe someday soon we will have to find a new liberal editor.

Whilst I agree with most points the article raised, I would be interested to see what level of regulation would be required for Ms Margaroli to allow a ban on facial recognition software to be lifted.

I myself cannot think of a way to stop the technology from impeding our autonomy and right to privacy, whilst also remaining profitable enough for private companies to develop and sell.

Even if the technology was only in state hands, the possibility of it being abused by police and party personnel or even the entire database being leaked is too high. If facial recognition of this sort is to be banned, it is probably best that it stays that way.

Written by Labour Editor, Daniel Orchard

Dodgy company, dodgy software – Labour Article

First I will address Clearview AI, the specific firm and software, before touching briefly on the general use of facial recognition software in law enforcement.

Whilst I am not in the U.S. and thus not affected by this software, I find it deeply worrying. Therefore, I applaud New Jersey’s Attorney General barring police from using it. I hope other states and the federal government take similar positions and launch inquiries into this software and any forthcoming or present facsimiles.

My dislike of this company and software is multifaceted. In short succession and in no particular order: Clearview would not disclose to the NYPD who had access to uploaded images; there is a high potential for abuse of the software by police officers (police in America have a domestic abuse rate two to four times higher than the general population); and it has scraped photos from hundreds of sources without permission from the companies or, more importantly, from the people.

In its promotional material, Clearview has allegedly lied about the involvement of its software in the apprehension of a bombing suspect.

Moreover, the company’s founder, Hoan Ton-That, has been involved in privacy invasion before. One of his previous ventures, ViddyHo, was a phishing scam “that tricked users into sharing access to their Gmail accounts”.

He has also been seen with Holocaust denier Chuck C. Johnson. When together, they were photographed flashing the OK symbol, which has been used as a dog whistle by far-right activists. For a bit more information about dog whistles and their use, see this video by Knowing Better.

Concerns have been raised about the prospect of the app being used by the public, intentionally or otherwise. Clearview has responded to these concerns saying that the app is not and will not be available to the public in any form. However, I believe that if law enforcement across the country is not willing or able to work with them, then it is likely that taking the app public is the only way for the company to stay afloat.

If that situation does arise, it is hard to imagine a company choosing to close down its operations rather than take the easy money available. So if legislators ban law enforcement from using this software, they will likely have to go further and ban civilian use as well, since that is the likely outcome of a police ban.

More broadly, I am against any use of facial recognition software in law enforcement. I find its development highly troubling and advocate a ban, or at the very least a moratorium pending heavy restrictions and regulation.

One reason for this is that facial recognition software has been found to be racially and sexually biased. Amazon’s Rekognition software performs far worse at identifying people with darker skin tones, and women generally. This is not the only time supposedly neutral AI has been shown to harbour biases. If such software were used and relied on by law enforcement, it could lead to widespread discrimination.

At home, the Metropolitan Police have started to roll out facial recognition software. In Romford, a police van appeared on the high street and scanned the faces of all those who walked past. When one man, not wanting his face scanned, pulled his jumper up to cover it, he was forcibly pulled to one side, scanned anyway, and issued a £90 fine.

This was despite the police declaring “anyone who declines to be scanned will not necessarily be viewed as suspicious”. Such acts could become far more commonplace in the future as coverage expands.

As someone who is against a surveillance state, I believe we must fight its encroachment in all its forms. One of these is facial recognition software. People should not have to worry about their face being scanned or scraped just from walking down the high street, putting a picture on their LinkedIn, or sharing photos with friends. Despite its possible benefits, this software should be banned.

Written by Labour Editor, Daniel Orchard

Point of Information

All use is concerning – a Liberal response

I agree with Mr Orchard’s conclusion that the software should be banned. But, I think the way the point has been reached is slightly convoluted.

It is the case that police and state use of facial recognition technology is concerning. However, I believe public use is far more concerning. Suspicions about the motives of state use, some probably quite well founded, are indeed worrying, but should not be our primary worry.

Access to this level of sophisticated facial recognition technology by criminal organisations is deeply unsettling. If criminal organisations get hold of this technology (legally or otherwise), we will be living in perpetual danger. We can hope that if the government used the technology, it would at least attempt to use it to benefit us and increase our safety. The same cannot be said of criminals (or even private firms!).

The points raised regarding AI’s systemic biases are extremely important, not just in this area but across all AI technology. I believe, and I think Mr Orchard would agree, that if we cannot create AI that is completely unbiased, it should not be used.

Increasingly we are seeing AI technology do menial tasks such as sifting through CVs. If this is done with a systematic bias then the implications for the future will be detrimental.

Written by Liberal Editor, Olivia Margaroli

Against all surveillance? – a Conservative response

I agree with the majority of the points in this article. Facial recognition software such as Clearview claims 100% accuracy, yet racial discrimination has proven to be an issue.

I understand that mass surveillance has huge ethical and moral complications. However, I take issue with fighting against all types of surveillance, as Mr Orchard suggests we do.

For example, noncommunicable disease surveillance uses our personal data. However, this hasn’t led to a ‘surveillance state’. It has simply allowed health organisations to be able to identify and control noncommunicable diseases.

This shows that we shouldn’t necessarily oppose all forms of data collection and mass surveillance; it can be imperative for improving public health.

Written by Conservative Editor, Eleanor Roberts
