By Byron Kaye and Katie Paul
SYDNEY/NEW YORK (Reuters) – Three years after Meta shut down facial recognition software on Facebook (NASDAQ:META) amid a groundswell of privacy and regulator pushback, the social media giant said on Tuesday it is testing the service again as part of a crackdown on “celeb bait” scams.
Meta said it will enroll about 50,000 public figures in a trial which involves automatically comparing their Facebook profile photos with images used in suspected scam advertisements. If the images match and Meta believes the ads are scams, it will block them.
The celebrities will be notified of their enrollment and can opt out if they do not want to participate, the company said.
The company plans to roll out the trial globally from December, excluding some large jurisdictions where it does not have regulatory clearance such as Britain, the European Union, South Korea and the U.S. states of Texas and Illinois, it added.
Monika Bickert, Meta’s vice president of content policy, said in a briefing with journalists that the company was targeting public figures whose likenesses it had identified as having been used in scam ads.
“The idea here is: roll out as much protection as we can for them. They can opt out of it if they want to, but we want to be able to make this protection available to them and easy for them,” Bickert said.
The test shows a company trying to thread the needle of using potentially invasive technology to address regulator concerns about rising numbers of scams while minimising complaints about its handling of user data, which have followed social media companies for years.
When Meta shuttered its facial recognition system in 2021, deleting the face scan data of one billion users, it cited “growing societal concerns”. In August this year, the company was ordered to pay Texas $1.4 billion to settle a state lawsuit accusing it of collecting biometric data illegally.
At the same time, Meta faces lawsuits accusing it of failing to do enough to stop celeb bait scams, which use images of famous people, often generated by artificial intelligence, to trick users into giving money to non-existent investment schemes.
Under the new trial, the company said it will immediately delete any face data generated by comparisons with suspected advertisements regardless of whether it detected a scam.
The tool being tested was put through Meta’s “robust privacy and risk review process” internally, as well as discussed with regulators, policymakers and privacy experts externally before tests began, Bickert said.
Meta said it also plans to test using facial recognition data to let non-celebrity users of Facebook and its other platform, Instagram, regain access to accounts that have been compromised by a hacker or locked after a forgotten password.