Apple will only scan for abuse images flagged in multiple countries


Apple responded this week to criticism of its plans to scan iCloud photos for child abuse imagery. Among other things, the company says it will only flag images that appear in at least two different child safety databases.

Apple hopes this will reduce the number of false positives produced by its feature for detecting child sexual abuse material (CSAM). The company says the databases must come from organizations in different countries. “The on-device encrypted child abuse database contains only data independently submitted by two or more child safety organizations, located in separate jurisdictions, and thus not under the control of the same government,” Apple wrote in a report on the upcoming safety feature.
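To make that requirement concrete, here is a minimal sketch of how such an intersection could be built. The `ChildSafetyDatabase` type and `buildOnDeviceDatabase` function are hypothetical names for illustration only; Apple has not published its actual pipeline.

```swift
import Foundation

// Hypothetical hash lists from child safety organizations in separate
// jurisdictions; real entries are perceptual image hashes, represented
// here as opaque Data values.
struct ChildSafetyDatabase {
    let jurisdiction: String
    let hashes: Set<Data>
}

// Only hashes submitted independently by organizations in at least two
// distinct jurisdictions make it into the on-device database, per the
// requirement Apple describes in its report.
func buildOnDeviceDatabase(_ sources: [ChildSafetyDatabase]) -> Set<Data> {
    var jurisdictionsPerHash: [Data: Set<String>] = [:]
    for source in sources {
        for hash in source.hashes {
            jurisdictionsPerHash[hash, default: []].insert(source.jurisdiction)
        }
    }
    // Keep only hashes vouched for by two or more jurisdictions.
    return Set(jurisdictionsPerHash.filter { $0.value.count >= 2 }.keys)
}
```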

In theory, this should prevent individual governments from inserting images unrelated to child abuse and having them flagged by Apple’s system. The company also says it will not flag an iCloud account until the system identifies 30 or more images as child abuse. This threshold was chosen to provide a ‘drastic margin of safety’ against false positives, and it may change over time as the system’s real-world performance is evaluated.

Image via Apple

Apple claims that it cannot decrypt any data before this threshold of 30 images is crossed, and that it is not possible for Apple to verify the number of matches for a particular account. Once the 30-image threshold is crossed, Apple’s servers can decrypt only the safety vouchers that correspond to positive matches; the servers get no information about other images. These vouchers provide access to a visually derived version of a positively flagged image, such as a low-resolution version. These are then reviewed by a human before the tech giant closes the account and forwards a report to a child safety organization.
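The sketch below models that threshold behaviour at the account level. It is a deliberate simplification: in Apple’s actual design the threshold is enforced cryptographically, so the server cannot decrypt or even count matches below it. `SafetyVoucher`, `matchThreshold` and `vouchersAvailableForReview` are illustrative names that only reproduce the described outcome.

```swift
import Foundation

// Simplified model of the server-side flow described above, not the
// real cryptography: below the threshold nothing is readable at all.
struct SafetyVoucher {
    let isPositiveMatch: Bool
    let encryptedVisualDerivative: Data  // e.g. a low-resolution version
}

let matchThreshold = 30  // figure quoted by Apple; may change over time

func vouchersAvailableForReview(_ vouchers: [SafetyVoucher]) -> [Data] {
    let positives = vouchers.filter { $0.isPositiveMatch }
    // Below the threshold, no vouchers can be opened for an account.
    guard positives.count >= matchThreshold else { return [] }
    // Above it, only vouchers for positive matches are opened;
    // non-matching images remain opaque to the server.
    return positives.map { $0.encryptedVisualDerivative }
}
```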

Apple will also make it possible for third parties to audit the CSAM database. “An auditor can confirm for any given root hash of the encrypted CSAM database in the Knowledge Base article or on a device that the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, deletions, or changes.” Child safety organizations do not have to share sensitive material to facilitate such an audit, Apple says.
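A hedged sketch of what such an audit check could look like, assuming the published root hash is a digest over the intersection of the organizations’ hash lists: Apple has not disclosed the exact construction, so `auditRootHash` and the sorted SHA-256 scheme below are assumptions for illustration.

```swift
import Foundation
import CryptoKit

// Hypothetical audit: recompute a root hash over the intersection of the
// participating organizations' hash lists and compare it with the root
// hash Apple publishes. A sorted SHA-256 digest stands in for Apple's
// unspecified construction.
func auditRootHash(publishedRootHash: Data,
                   participatingLists: [Set<Data>]) -> Bool {
    guard participatingLists.count >= 2,
          let first = participatingLists.first else { return false }
    let intersection = participatingLists.dropFirst()
        .reduce(first) { $0.intersection($1) }

    var hasher = SHA256()
    for entry in intersection.sorted(by: { $0.lexicographicallyPrecedes($1) }) {
        hasher.update(data: entry)
    }
    let recomputed = Data(hasher.finalize())
    // Any addition, deletion, or change would alter the recomputed root.
    return recomputed == publishedRootHash
}
```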

Apple announced earlier this month that it would add a CSAM detection feature to iOS 15 and iPadOS 15. The feature scans photos that users upload to iCloud for known child abuse imagery, using databases of hashes from child safety organizations such as the National Center for Missing and Exploited Children in the US. The feature will be built into iOS 15 and iPadOS 15 in all countries, but for now it will only be enabled in North America. The tech giant writes that it will not scan for unknown images that do not appear in CSAM databases.
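The matching step itself boils down to a set-membership check. In the sketch below, `perceptualHash` is a placeholder for Apple’s actual hashing function, which this article does not describe; in the real system the device does not learn the result directly but instead produces an encrypted safety voucher.

```swift
import Foundation

// Minimal sketch of the matching step: a photo being uploaded to iCloud
// is hashed on the device and checked against the known CSAM hash set.
func isKnownCSAM(photo: Data,
                 knownHashes: Set<Data>,
                 perceptualHash: (Data) -> Data) -> Bool {
    // Unknown images, whose hashes are absent from the database,
    // are never flagged.
    return knownHashes.contains(perceptualHash(photo))
}
```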

When the feature was announced, several parties criticized Apple’s plans. According to the Center for Democracy & Technology, the company would be establishing a surveillance and censorship infrastructure that is vulnerable to abuse worldwide. Apple has promised to use the photo-scanning function only to detect child abuse. Apple’s head of software, Craig Federighi, acknowledged that the company created confusion with the announcement, but the tech giant is sticking with the update and has not changed its rollout plans following the criticism.
