Apple will roll out a system for checking photos for child abuse imagery on a country-by-country basis, depending on local laws, the company said on Friday.

A day earlier, Apple said it would implement a system that screens photos for such images before they are uploaded from iPhones in the United States to its iCloud storage.

Child safety groups praised Apple as it joined Facebook, Microsoft and Alphabet’s Google in taking such measures.

But Apple’s photo check on the iPhone itself raised concerns that the company is probing into users’ devices in ways that could be exploited by governments. Many other technology companies check photos after they are uploaded to servers.

In a media briefing on Friday, Apple said its plans to expand the service would depend on the laws of each country where it operates.

The company said nuances in its system, such as “safety vouchers” passed from the iPhone to Apple’s servers that do not contain useful data, will protect Apple from government pressure to identify material other than child abuse images.

Apple has a human review process that acts as a backstop against government abuse, it added. The company will not pass reports from its photo checking system to law enforcement if the review finds no child abuse imagery.

Regulators are increasingly demanding that tech companies do more to take down illegal content. For the past few years, law enforcement and politicians have wielded the scourge of child abuse material to decry strong encryption, in the way they had previously cited the need to curb terrorism.

A few resulting laws, including in Britain, could be used to force tech companies to act against their users in secret.

While Apple’s strategy may deflect government meddling by showing its initiative or complying with anticipated directives in Europe, many security experts said the privacy champion was making a big mistake by showing its willingness to reach into customer phones.

“It may have deflected US regulators’ attention for this one topic, but it will attract regulators internationally to do the same thing with terrorist and extremist content,” said Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory.

Politically influential copyright holders in Hollywood and elsewhere could even argue that their digital rights should be enforced in such a way, she said.

Facebook’s WhatsApp, the world’s largest fully encrypted messaging service, is also under pressure from governments that want to see what people are saying, and it fears that pressure will now increase. WhatsApp chief Will Cathcart tweeted a barrage of criticism of Apple’s new architecture on Friday.

“We’ve had personal computers for decades, and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content,” he wrote. “It’s not how technology built in free countries works.”

Apple’s experts argued that they were not really going into people’s phones because data sent on its devices must clear multiple hurdles. For example, banned material is flagged by watchdog groups, and the identifiers are bundled into Apple’s operating systems worldwide, making them harder to manipulate.

Some experts said they had one reason to hope Apple had not truly changed direction in a fundamental way.

As Reuters reported last year, the company had been working to make iCloud backups end-to-end encrypted, meaning the company could not turn over readable versions of them to law enforcement. It dropped the project after the FBI objected.

Apple may be setting the stage to turn on the encryption later this year, using this week’s measures to head off anticipated criticism of that change, said Stanford Observatory founder Alex Stamos.

Apple declined to comment on future product plans.

© Thomson Reuters 2021





