The most controversial facial recognition startup is partnering with Ukraine

The Ukrainian Ministry of Defense is using the biometric technology of Clearview AI, the controversial New York startup that trains facial recognition algorithms on photographs scraped from the web without consent and sells them to law enforcement agencies and private companies, Reuters has revealed. The company has offered its services free of charge to the Ukrainian government to identify possible Russian infiltrators, reunite refugees with their families, and identify those killed in the military invasion that Moscow launched almost twenty days ago.

Over the past year, Clearview AI's services have been outlawed in several countries, including Canada, Australia, the UK, France and, just a few days ago, Italy. The various data protection authorities found that the company collected photos without consent through data scraping, and that its service amounts to mass surveillance and profiling.

Since 2017, the startup has built a database of over 10 billion personal images, which it uses to train its biometric algorithms. Of these, over 2 billion come from the Russian social network VKontakte (VK), which is also very popular among the military, as demonstrated by Bellingcat's investigations during the Russian occupation of Crimea.

Officially, Ukraine is receiving free access to Clearview AI's search engine to analyze the faces of people stopped at roadblocks, identify Russian agents, help the government fact-check war-related online posts, reunite refugees with their families, and recognize the dead on the battlefield. However, the use of this technology could end up "harming the very people it is supposed to help," Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project in New York, told Reuters.

"Once these systems and associated databases are introduced into a war zone," said Cahn, "there is no control over how they will be used or abused." Clearview AI's technology could misidentify people at checkpoints and in combat, with risks of wrongful arrest, denial of aid, or even worse consequences.