Apple Responds to Photo Scanning Backlash With a FAQ

“Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?”
Illustration: Front Page Tech

We kind of, sort of, saw this coming. After backlash and concerns from privacy advocates and notable whistleblowers following Apple’s announcement of its “Expanded Protections for Children,” which would scan users’ iCloud photos, Apple last night published a six-page FAQ aimed at addressing those concerns.

The FAQ begins:

“Since we announced these features, many stakeholders including privacy organizations and child safety organizations have expressed their support of this new solution, and some have reached out with questions.”

“This document serves to address these questions and provide more clarity and transparency in the process.”

– Apple

When we originally covered this here, we broke it up into a two-parter: the first part addressed “communication safety in Messages,” which notifies parents when their child receives or sends a sexually explicit photo; the second tackled the more controversial “enhanced detection of Child Sexual Abuse Material (CSAM),” which would scan users’ iCloud photos. We split it up because they appeared to be two separate features, and in its newly published FAQ, Apple confirms this.

From the FAQ:

What are the differences between communication safety in Messages and CSAM detection in iCloud Photos?

These two features are not the same and do not use the same technology.

Communication safety in Messages is designed to give parents and children additional tools to help protect their children from sending and receiving sexually explicit images in the Messages app. It works only on images sent or received in the Messages app for child accounts set up in Family Sharing. It analyzes the images on-device, and so does not change the privacy assurances of Messages. When a child account sends or receives sexually explicit images, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view or send the photo. As an additional precaution, young children can also be told that, to make sure they are safe, their parents will get a message if they do view it.

The second feature, CSAM detection in iCloud Photos, is designed to keep CSAM off iCloud Photos without providing information to Apple about any photos other than those that match known CSAM images. CSAM images are illegal to possess in most countries, including the United States. This feature only impacts users who have chosen to use iCloud Photos to store their photos. It does not impact users who have not chosen to use iCloud Photos. There is no impact to any other on-device data. This feature does not apply to Messages.

– Apple
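A quick aside on what “match known CSAM images” actually means in practice: according to the technical summary Apple published alongside this FAQ, each photo is reduced to a perceptual fingerprint (Apple calls its version NeuralHash) and compared against a fixed list of hashes supplied by child safety organizations; the system never tries to judge what is in an unknown picture. Here’s a deliberately simplified Swift sketch of that known-list lookup idea. It stands in SHA-256 for the real perceptual hash and skips Apple’s on-device private set intersection machinery entirely, so treat it as an illustration of the concept, not Apple’s implementation:

import CryptoKit
import Foundation

// Hypothetical list of known-image fingerprints. In the real system these come
// from child safety organizations; the values here are placeholders.
let knownHashes: Set<String> = ["placeholder-fingerprint-1", "placeholder-fingerprint-2"]

// Fingerprint a photo's raw bytes. Apple uses a perceptual hash (NeuralHash),
// not SHA-256; SHA-256 is used here only to keep the sketch runnable.
func fingerprint(of photoData: Data) -> String {
    SHA256.hash(data: photoData).map { String(format: "%02x", $0) }.joined()
}

// A photo is only a candidate match if its fingerprint is already on the known
// list; nothing about non-matching photos is learned or reported.
func matchesKnownList(_ photoData: Data) -> Bool {
    knownHashes.contains(fingerprint(of: photoData))
}

The property Apple keeps leaning on in the FAQ is that this kind of lookup can only answer “is this one of the images already on the list?” rather than “what is in this photo?”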

Other major questions addressed in the first section, which covers the new Messages features:

  • Who can use communication safety in Messages?
  • Does this break end-to-end encryption in Messages?
  • Does this feature prevent children in abusive homes from seeking help?
  • Will parents be notified without children being warned and given a choice?

The full answers to those questions can be found here, but it’s worth noting that Apple answered most of them with a “no.”

The second section goes on to address CSAM detection with these major questions:

  • Does this mean Apple is going to scan all the photos stored on my iPhone?
  • Will this download CSAM images to my iPhone to compare against my photos?
  • Why is Apple doing this now?

Finally, the last section, titled “Security for CSAM detection for iCloud Photos,” answers these questions:

  • Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?
  • Could governments force Apple to add non-CSAM images to the hash list?
  • Can non-CSAM images be “injected” into the system to flag accounts for things other than CSAM?

Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?

No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.

– Apple

Let’s just hope their system was actually “designed to be very accurate.”
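To be fair, the “one in one trillion” figure isn’t a claim about any single photo; it comes from requiring a whole stack of matches before an account is ever flagged, with human review on top. Here’s a rough Swift sketch of that threshold math. Every number in it is made up for illustration (Apple hasn’t published its per-image false-positive rate, and the exact match threshold isn’t in the FAQ), so it shows the shape of the argument, not Apple’s actual figures:

import Foundation

// All numbers below are hypothetical, chosen only to illustrate how a threshold
// drives down the odds of an innocent account being flagged.
let perImageFalsePositiveRate = 1e-6    // hypothetical chance one innocent photo matches a known hash
let threshold = 30                      // hypothetical number of matches required to flag an account
let photosPerYear = 10_000              // hypothetical number of photos one account uploads per year

// Binomial tail: probability of `threshold` or more false matches out of
// `photosPerYear` independent uploads. Summed term by term, because the tail is
// far too small for `1 - P(fewer than threshold)` to survive rounding.
let p = perImageFalsePositiveRate
var pmf = pow(1.0 - p, Double(photosPerYear))   // P(exactly 0 false matches)
var tail = 0.0
for k in 0...photosPerYear {
    if k >= threshold { tail += pmf }
    if k < photosPerYear {
        // Step from P(exactly k) to P(exactly k + 1) false matches.
        pmf *= (Double(photosPerYear - k) / Double(k + 1)) * (p / (1.0 - p))
    }
}
print("Chance an innocent account crosses the threshold in a year ≈ \(tail)")

With those made-up inputs the result lands absurdly far below one in one trillion; whether the real system behaves that way depends entirely on how well NeuralHash holds up against real-world photos, which is exactly the part Apple is asking us to take on faith.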

For the full answers to these questions, I encourage you to read the FAQ here. You won’t get much more than “milk and cookies,” but at least they’re trying. Full FPT episode on this below:
