Specifically, the feature warns children and parents when a device is sending or receiving images that contain nudity. This new child safety feature will be available to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey.

What is the Feature and How Does It Work?

In a technical document, Apple explains that it has developed these tools with the help of child safety experts. The new feature in the Messages app will enable parents to help their children navigate online communication while still keeping these conversations private. This means that Apple cannot see the private conversations. When a child receives a sexually explicit image, the photo will be blurred, and the child will be warned. The child will then be presented with useful resources and reassured that they do not have to open the image. Parents will receive a message if the child opens the image. A similar process is followed when sending a sexually explicit image: the child is warned before sending the image, and the parents are notified if the child sends it anyway. To check for sensitive content, the Messages app will use on-device machine learning to analyze attached photos and determine whether they are sexually explicit.
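To make that flow more concrete, the Swift sketch below shows the kind of decision logic the paragraph describes. The classifier stub, the function names, and the parent-notification step are illustrative assumptions, not Apple's published implementation; the only point it captures is that the check and the decision both happen on the device.

```swift
import Foundation

// Hypothetical stand-in for Apple's on-device machine-learning model.
// In this sketch it simply returns false; a real model would run
// inference over the image bytes entirely on the device.
func imageLooksSexuallyExplicit(_ imageData: Data) -> Bool {
    return false
}

enum MessageImageAction {
    case showNormally
    case blurAndWarn   // blur the photo and show the child a warning
}

// Decide how the Messages app should present an incoming photo.
// The image never leaves the device for this check.
func handleIncomingImage(_ imageData: Data) -> MessageImageAction {
    imageLooksSexuallyExplicit(imageData) ? .blurAndWarn : .showNormally
}

// If the child chooses to view a flagged image anyway, the parent on a
// family iCloud account is notified (sketched here as a simple print).
func childOpenedFlaggedImage(parentNotificationsEnabled: Bool) {
    if parentNotificationsEnabled {
        print("Notifying parent: a flagged image was viewed.")
    }
}
```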

New Tools to Detect Child Sexual Abuse Material

The growing use of encrypted communication keeps us safe and protects sensitive information. However, the increase in Child Sexual Abuse Material (CSAM) is a concern for law enforcement agencies around the world. According to the technical paper, Apple is keen to address this problem. Apple plans to use new technology in iOS and iPadOS to detect CSAM images stored in iCloud Photos. It aims to do so using a technique known as hashing, a way of identifying similar images.

Here is how Apple plans to tackle CSAM. First, it will use NeuralHash technology to analyze images and assign each one a unique, identifiable number, or hash. Next, it will run an on-device matching process known as "private set intersection," which checks these hashes against a database of known CSAM content. If matches are found and a certain threshold is met, Apple will conduct a manual review. If the review confirms the matches, Apple will disable the user's account and send a report to the National Center for Missing and Exploited Children (NCMEC). According to Apple, this process has significant privacy benefits over existing tools.
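As a rough illustration of the matching-and-threshold idea, the Swift sketch below substitutes an ordinary cryptographic hash and a plain set lookup for NeuralHash and the private set intersection protocol. The hash set, the threshold value, and the function names are all assumptions made for this example; Apple's actual scheme uses a perceptual hash and cryptographic matching so that non-matching images are never revealed.

```swift
import Foundation
import CryptoKit

// Illustrative stand-in for a perceptual hash such as NeuralHash.
// A real perceptual hash maps visually similar images to the same value;
// SHA-256 of the raw bytes is used here only to keep the sketch runnable.
func imageHash(_ imageData: Data) -> String {
    SHA256.hash(data: imageData).map { String(format: "%02x", $0) }.joined()
}

// Hashes of known CSAM images, as supplied to Apple (empty placeholder here).
let knownCSAMHashes: Set<String> = []

// Number of matches required before anything is escalated for human review.
// The value is assumed for illustration; Apple only says "a certain threshold."
let reviewThreshold = 30

// Count how many of a user's iCloud-bound photos match the known set.
func matchCount(for photos: [Data]) -> Int {
    photos.filter { knownCSAMHashes.contains(imageHash($0)) }.count
}

// Escalate to manual review only once the match count crosses the threshold.
func shouldEscalateForManualReview(photos: [Data]) -> Bool {
    matchCount(for: photos) >= reviewThreshold
}
```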

Concerns of Security Experts

While lauding these tools for their benefits, cryptographers have also advised caution. The potential for misuse is a common concern with any workaround to encryption, and Apple's machine-learning and on-device filtering features raise similar concerns. For now, the system aims to flag only sensitive and CSAM content, but doing so reliably is a truly challenging problem.
