Apple recently introduced a new child safety feature that scans users' iCloud Photos for images of child sexual abuse on behalf of governments. The decision drew severe criticism from privacy advocates.
Now, addressing the criticism, Reuters reports that the tech giant has clarified it will use the feature only to scan images that have "been flagged by clearinghouses in multiple countries". The automated scanning system will alert Apple only after an account crosses an initial threshold of 30 flagged images, at which point a human reviewer steps in to assess the matches.
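To illustrate the reported threshold behaviour, here is a minimal, hypothetical Python sketch. It is not Apple's code; the function name and structure are assumptions, and only the 30-image figure comes from the report above.

```python
# Hypothetical sketch, not Apple's implementation: an account only triggers
# an alert for human review once its matched-image count reaches a threshold.

MATCH_THRESHOLD = 30  # initial threshold reported by Apple

def should_alert(matched_image_count: int, threshold: int = MATCH_THRESHOLD) -> bool:
    """Return True only when an account's matched-image count reaches the threshold."""
    return matched_image_count >= threshold

# Example: 29 matches stays silent; 30 would surface the account for human review.
for count in (29, 30):
    print(count, "->", "alert human reviewer" if should_alert(count) else "no alert")
```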
The company said the threshold would eventually be lowered. It also made clear that its list of image identifiers is universal and will be identical on every device it is applied to.
Apple further explained that its implementation builds the encrypted on-device Child Sexual Abuse Material (CSAM) hash database from entries acquired from at least two organizations working under separate national governments.
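The practical effect of that rule can be sketched as a simple set intersection: a hash makes it onto devices only if organizations in at least two distinct jurisdictions vouch for it. The following Python example is an assumption-laden illustration of that idea; the organization names, hashes, and function are entirely made up.

```python
# Hypothetical illustration of the "two separate governments" rule described
# above; the data structures and names here are assumptions, not Apple's API.
from typing import Dict, Set

def build_on_device_hashes(org_lists: Dict[str, Set[str]],
                           org_jurisdiction: Dict[str, str],
                           min_jurisdictions: int = 2) -> Set[str]:
    """Keep a hash only if organizations in at least `min_jurisdictions`
    distinct national jurisdictions have all flagged it."""
    all_hashes: Set[str] = set().union(*org_lists.values()) if org_lists else set()
    included: Set[str] = set()
    for h in all_hashes:
        jurisdictions = {org_jurisdiction[org]
                         for org, hashes in org_lists.items() if h in hashes}
        if len(jurisdictions) >= min_jurisdictions:
            included.add(h)
    return included

# Made-up example: only "h2" appears in lists from two different countries.
lists = {"OrgA": {"h1", "h2"}, "OrgB": {"h2", "h3"}}
jurisdictions = {"OrgA": "Country1", "OrgB": "Country2"}
print(build_on_device_hashes(lists, jurisdictions))  # {'h2'}
```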
The tech giant didn't say whether the backlash had affected its position, but it did acknowledge there was "confusion" around its initial announcements and noted that the program is "still in development".
Source: Reuters