Apple announced three different "Child Safety" features [1]. The most significant one is on-device photo scanning: the device calculates a perceptual hash for each image and compares it against a database of hashes of known Child Sexual Abuse Material (CSAM). While we can all agree it is important to "get those bastards," and we should look out for our little ones, security experts immediately raised concerns [2,3,4]:
CSAM fingerprints are perceptual, deliberately not bit-perfect, so that resized or re-encoded copies of known material still match. The flip side is that innocent files may be falsely flagged and uploaded to Apple's servers.
This system can detect any image as long as the hash is present in the database. There is no technical requirement for those hashes to be CSAM. For example, it's possible to detect political campaign posters or similar images on users' devices by extending the database.
Because the hash is perceptual rather than cryptographic, it is also possible to deliberately craft an image that matches a given hash (a second-preimage attack). So, for example, someone who wants to get another person in trouble can send them innocent-looking images (like images of kittens) that have been subtly manipulated to match a hash of known CSAM.
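To see why perceptual hashes behave this way, here is a minimal sketch using an "average hash", a far simpler scheme than Apple's NeuralHash but built on the same idea: similar-looking images are supposed to produce identical (or nearly identical) hashes. The image here is just a hypothetical 8x8 grid of grayscale values, so no imaging library is needed.

```python
# Toy illustration of perceptual hashing (NOT Apple's NeuralHash):
# an "average hash" maps an 8x8 grayscale image to 64 bits by
# thresholding each pixel against the image's mean brightness.
# Similar images land on the same side of the threshold, so the
# matching is fuzzy by design.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A simple gradient image and a slightly brightened copy of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 10) for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(brightened)
print(hamming_distance(h1, h2))  # prints 0: the edited copy still matches
```

A cryptographic hash like SHA-256 would flip roughly half its bits under the same brightening; a perceptual hash must not, and that tolerance is exactly what makes both false positives and deliberate collisions possible.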
This site is a proof of concept for collision attacks. The images of the kittens have been manipulated to match the hash of the image of the dog (59a34eabe31910abfb06f308): when hashed with Apple's NeuralHash algorithm, all images shown on this page return the same hash. Asuhariet Ygvar created a GitHub repo with instructions so you can verify this for yourself [5].
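The collision idea can be demonstrated against the toy "average hash" above. This is a deliberately simplified stand-in: forging a NeuralHash collision takes gradient-based optimization against the neural network, whereas the toy hash below can be inverted directly. The 64-bit target value is a truncated, hypothetical stand-in for the 96-bit hash shown on this page.

```python
# Toy second-preimage attack on an "average hash" (NOT NeuralHash):
# given only a target hash, construct an image whose hash matches it.
# This mirrors the kitten/dog collisions on this page, but against a
# trivially invertible hash rather than Apple's neural network.

def average_hash(pixels):
    """8x8 grayscale image -> 64-bit perceptual hash (mean threshold)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def forge_image(target_hash):
    """Build an 8x8 image that hashes to target_hash: bright pixels for
    1-bits, dark pixels for 0-bits, so every pixel lands on the right
    side of the mean (works for any hash that is not all-0s/all-1s)."""
    bits = [(target_hash >> (63 - i)) & 1 for i in range(64)]
    flat = [200 if b else 100 for b in bits]
    return [flat[r * 8:(r + 1) * 8] for r in range(8)]

# Hypothetical 64-bit target, truncated from the dog's hash above.
dog_hash = 0x59A34EABE31910AB
fake_kitten = forge_image(dog_hash)
assert average_hash(fake_kitten) == dog_hash  # forged collision
```

Against NeuralHash the attacker cannot invert the hash like this, but the published collisions show that an optimizer nudging pixel values toward a target hash achieves the same end result.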
Apple has stated that if a user doesn't use iCloud Photos, no part of the CSAM detection process runs. So you can opt out by disabling iCloud Photos. The downside is that your photos are no longer synced between your devices or backed up to the cloud.
You can sign the Open Letter Against Apple's Privacy-Invasive Content Scanning Technology [6] or the petition of the Electronic Frontier Foundation [7].