Clearview AI has been hit with a new sanction for violating European privacy rules.
Greece’s data protection authority has fined the controversial facial recognition company €20 million and banned it from collecting and processing the personal data of people living in Greece. The watchdog has also ordered the company to delete all the data on Greek citizens it has already collected.
Since late last year, national data protection authorities in the UK, Italy and France have also made similar decisions penalizing Clearview – effectively freezing its ability to sell its services in those markets, as local customers would themselves risk fines for using it.
The US-based company rose to notoriety by scraping selfies off the internet to build an algorithmic identity-matching commercial service targeting law enforcement and others, including private sector entities.
Last year, privacy regulators in Canada and Australia also concluded that Clearview’s operations violate local laws – earlier blows to its ability to scale internationally.
More recently, in May, Clearview agreed to major restrictions on its services in its domestic market, the US, to settle a 2020 lawsuit brought by the American Civil Liberties Union (ACLU), which had accused it of violating an Illinois state law that prohibits the unauthorized use of individuals’ biometric data.
The European Union’s data protection framework, the General Data Protection Regulation (GDPR), sets an equally high bar for the legal use of biometrics to identify individuals – a standard that applies not only across the bloc but also in some non-member states (including the UK), so around 30 countries in total.
Under the GDPR, such a sensitive use of personal data (i.e. facial recognition for an identity-matching service) would require, at a minimum, the explicit consent of the data subjects to the processing of their biometric data.
Yet it’s clear that Clearview hasn’t obtained the consent of the billions of people (likely including millions of Europeans) whose selfies it has been scraping from social media platforms and other online sources to train its facial recognition AIs – repurposing people’s data for a privacy-hostile end. So the growing pile of GDPR sanctions hitting it in Europe is not surprising. And more penalties may follow.
In its 23-page decision, the Greek DPA said Clearview had violated the GDPR’s lawfulness and transparency principles, finding breaches of Articles 5(1)(a), 6 and 9, as well as of its obligations under Articles 12, 14, 15 and 27.
The Greek DPA’s decision follows a May 2021 complaint filed by a local human rights organization, Homo Digitalis, which has hailed the win in a press release – saying the €20 million fine sends a “strong signal against intrusive business models by companies seeking to monetize through the illegal processing of personal data”.
The advocacy group also suggested that the fine “sends a clear message to law enforcement agencies working with these types of companies that such practices are illegal and grossly violate data subjects’ rights.” (In an even more pointed enforcement last year, Sweden’s DPA fined the local police authority €250k for unlawful use of Clearview, which it said breached the country’s Criminal Data Act.)
Clearview was contacted for comment on the Greek DPA sanction.
At the current count, the company has been fined – on paper – almost €50 million by regulators in Europe. But it’s less clear whether it has paid any of these fines, given pending appeals and the overarching challenge regulators face in enforcing local laws against a US-based entity that chooses not to cooperate.
The UK’s data protection authority, the Information Commissioner’s Office (ICO), told us Clearview is appealing its sanction in that market.
“We have received notification that Clearview AI has appealed. Clearview AI is under no obligation to comply with the enforcement notice or pay the fine until the appeal is decided. We will not comment on this matter as long as the legal process is ongoing,” an ICO spokesperson said.
Clearview’s responses to previous GDPR sanctions have suggested it is currently not doing business in the affected markets. But it remains to be seen whether enforcement will work to keep it out of the region permanently — or whether it could evade sanctions by modifying its product in some way.
In the US, Clearview spun its settlement with the ACLU as a “huge win” for the company – claiming it would be largely unaffected because it would still be able to sell its algorithm (rather than access to its database) to private companies in the US.
The US lawsuit settlement also included an exception for government contractors – suggesting Clearview may continue to work with US federal agencies such as the Department of Homeland Security and the FBI – while imposing a five-year ban on providing its software to government contractors or state or local government agencies in Illinois itself.
It is certainly noteworthy that European data protection authorities have so far not ordered the destruction of Clearview’s algorithm, despite multiple regulators concluding that it had been trained on ill-gotten personal data.
As we’ve previously reported, legal experts have suggested there’s a gray area over whether the GDPR allows regulatory authorities to order the deletion of AI models trained on unlawfully obtained data – not just the deletion of the data itself, which is what appears to have happened so far in the Clearview saga.
But incoming EU AI legislation could empower regulators to go further: the (still draft) Artificial Intelligence Act includes powers for market surveillance authorities to “take all appropriate corrective action” to bring an AI system into compliance – including withdrawing it from the market (which essentially amounts to commercial destruction) – depending on the nature of the risk it poses.
If that provision survives in the final version of the law, it suggests that any leeway for commercial entities to deploy unlawfully trained AI models within the EU could soon be headed for some harsh legal clarity.
Meanwhile, if Clearview complies with all these international orders to delete citizens’ data and stop processing it, it will be unable to keep its AI models topped up with fresh biometric data on people in the countries where it is barred from processing biometrics – implying that its product’s utility will degrade with each fully enforced ban.