Is Apple Doing Client-Side Scanning on Your Devices? | Rob Braxman

Categories
Posted in: News, Patriots, Rob Braxman Tech


Summary

➡ Rob Braxman talks about whether Apple is scanning your device’s content, a process known as client-side scanning. This became a topic of debate after a YouTube video suggested that Apple was receiving information every time a Mac user used Finder. Some people argued this was a bug, while others believed it was evidence of client-side scanning. The article concludes that while there is no definitive proof, there is enough suspicious activity to warrant further investigation.

Transcript

I want to answer this question today: is Apple scanning your device content or not? This tech, where your device performs a local scan of your photos or other content and may report it to Apple, is referred to as client-side scanning. Last year this became an issue of discussion among security researchers, the first salvo being raised and massively communicated through Louis Rossmann’s YouTube channel. With the power of over a million subscribers, Louis showed research indicating that a Mac was sending information to Apple each time Finder was used. This apparently was because of suspicious activity coming from a macOS module called mediaanalysisd (the media analysis daemon).

This person detected traffic being sent to Apple from this module each time a piece of media was displayed. Then quickly another group of people said that claim was debunked. They claimed that the communication of mediaanalysisd with Apple was just a bug. And so people were left thinking, oh, we’re safe! Apple isn’t doing this CSAM thing after all (CSAM scanning being Apple’s plan to examine content on your phone or computer, trying to spot illegal photos). Well, I’m here to tell you that the debunkers debunked nothing. In fact, their posts on X showed even further that Apple is doing something.

mediaanalysisd is busily working on all your Apple Silicon Macs and iPhones, often using tons of CPU. And the claim of debunking just means that they misunderstood the intent and approach of how something like client-side scanning could be implemented. In this video we’ll analyze what we know and see if we’re just being conspiracy theorists in claiming that Apple is doing client-side scanning, or if our fears are valid. Stay right there. First, let’s explain this for the non-techies. The issue of client-side scanning hit the headlines when Apple announced a couple of years ago that it was going to put code in the iPhone to scan photos for illegal material related to children, known as CSAM (child sexual abuse material).

The idea was that if such photos were found on your phone, this would be referred to law enforcement. But the new approach here is that no human would be doing the search. Instead, the AI of the phone would scan the photos and, using a tech they called NeuralHash, it would indicate if photos contained something illegal. This caused an uproar because of the fear of someone examining private photos and uploading them to Apple, even if done by an AI. Apple then suspended the CSAM project because of the public pushback. You’d think that was the end of it, except that the code for doing the scanning is still in the phone, and it was more easily found on a Mac.

So the published details we have are only for the Mac. Now, I don’t have a Mac with Apple Silicon, which is the particular model being tested here. I’ll probably have to buy one to examine this in more detail later. In the meantime I’ll have to rely on what others have reported. Last year a researcher named Jeffrey Paul showed that a macOS daemon called mediaanalysisd was communicating with Apple every time an image file was previewed, and of course he was concerned that this might be part of client-side scanning.

Many researchers run firewall software to detect network traffic from a computer, and it can be used to spot something unexpected; that’s how this little discovery was made. Jeffrey Paul was using the firewall product Little Snitch. A user named Mysk on X then looked specifically at the traffic being sent from the Mac to Apple when using Finder and found that it was just an empty transmission. No data was in the payload other than the header. So he stated that this security threat was non-existent. Then other people used this information to make statements on the internet saying that the client-side scanning issue was debunked.
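To make the firewall approach concrete, here is a toy sketch of the kind of per-process filtering a tool like Little Snitch performs. The log records, process names, and hostnames below are hypothetical illustrations, not real captures from a Mac.

```python
# Toy sketch: flag connections from a watched process to hosts that
# are not on an expected allow-list. The records here are invented
# for illustration only.

def flag_unexpected(records, watched_process, allowed_hosts):
    """Return records where the watched process contacts a host
    that is not on the allow-list."""
    return [
        r for r in records
        if r["process"] == watched_process
        and r["remote_host"] not in allowed_hosts
    ]

log = [
    {"process": "mediaanalysisd", "remote_host": "apple.example", "bytes": 0},
    {"process": "Safari",         "remote_host": "example.com",   "bytes": 4096},
]

suspicious = flag_unexpected(log, "mediaanalysisd", allowed_hosts=set())
for r in suspicious:
    print(r["process"], "->", r["remote_host"])
```

This is exactly the shape of the discovery described above: an unexpected process name paired with an outbound destination is what caught the researcher’s eye, even when the byte count was nearly zero.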

Okay, I want to make this clear: nothing was debunked, so that is a foolish statement. Mysk simply responded to a single event, and it isn’t sufficient to make any kind of conclusion. This topic was triggered by someone who contacted me who was doing research on the possibility of client-side scanning, and he sent me his documentation on what he’s seen so far, which is a lot of activity from the mediaanalysisd module. This person also tried to rescind all permissions from mediaanalysisd, including access to photos, and was unsuccessful.

This process has root privileges and you cannot shut it down; it will respawn if you do. This person also thinks that the module is communicating with Apple via a module called rapportd. Unfortunately I don’t have enough time to initiate separate research into this, because the only Mac I have is running macOS 11.7 and that is too old. The pertinent part of the OS that deals with potential client-side scanning is likely macOS 13 and later, and the AI chip is only on the Apple Silicon computers, not on Intel, which is what I have.
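The respawn behavior is standard for supervised system daemons. Here is a toy model, not Apple’s code, of launchd-style keep-alive supervision, which is why killing such a daemon does not make it go away:

```python
# Toy model of launchd-style supervision: if a keep-alive daemon
# exits, the supervisor immediately restarts it. This illustrates
# the respawn behavior described above; it is not Apple's code.

class Supervisor:
    def __init__(self, keep_alive=True):
        self.keep_alive = keep_alive
        self.running = True
        self.restarts = 0

    def kill(self):
        self.running = False
        if self.keep_alive:          # the supervisor notices the exit...
            self.running = True      # ...and respawns the daemon
            self.restarts += 1

d = Supervisor(keep_alive=True)
d.kill()
print(d.running, d.restarts)
```

On macOS the real mechanism is a launchd job with a KeepAlive key; from the user’s point of view the effect is the same: the process is back the moment you kill it.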

But there’s enough information from what others have posted to make some observations. My approach to analyzing any kind of possible threat to privacy and security is not the kind of approach that security researchers take. Security researchers have a specific job, and that is to identify current threats; they will actively try to duplicate or uncover an attack. The disservice done by some with this approach is that if they can’t find it, they have the attitude that it must not exist or that there is no threat, which is obviously incorrect.

Many threats are never found until post-attack, or after the revelation of a successful attack. Zero-day hacks are obviously unknown until someone reveals them. I know of many threats that exist and whose results are observable, but no security researcher has as of yet uncovered how they are done. Often the common approach of a researcher is to observe network traffic emanating from the device. Then, using a tool that fakes root certificates, which breaks HTTPS encryption, they can observe any HTTPS traffic coming from the device. Many of these more beginner researchers will then make the claim that if they cannot see suspicious HTTPS traffic, this is confirmation that there’s no trace of activity.

But using this tool does not uncover symmetric encryption, nor does it identify side-channel communications, and it does not consider delayed transmissions. I’m going to show you the flaws in this and examine it in light of the problem of Apple client-side scanning using mediaanalysisd. Security researchers are very important to our cybersecurity, and they are critical experts that make our infrastructure more secure. But as I said, my approach is different. I am not a security researcher; that is not my role. I’m more of a canary in a coal mine, which allows me to think ahead of the threat.

I examine the technology involved and then I think about how I would develop software to make use of that technology. Mostly I will think about how the tech can be used in an evil way and I found that historically I’ve been right. I’ve developed software for many industries in my career as a chief software architect and have experience in just about every area of software development. I’ve toyed with some unexpected ways to attack in my long career of software development and it’s interesting to see how similar approaches appear in the wild.

As you can see, this is completely upside down from the approach of a security researcher. I want you to bear this in mind as we examine Apple’s client-side scanning issue. Now let me get into what these researchers have shown on a Mac about mediaanalysisd. I’ve not found any information on what the equivalent module is on an iPhone and what traffic it sends, so we’ll stick to the Mac. Across these many comments about mediaanalysisd communicating, I see that no one is disputing that this little module is in fact scanning photos and creating a descriptor file of some sort that Apple has called a neural hash.

So every photo has this. This is the result of the analysis done by the AI chip on a Mac with Apple Silicon, and of course the equivalent is done on an iPhone with the Bionic chip’s Neural Engine. On his blog, cybersecurity guru Bruce Schneier posted that some compared the hashes and found that the size of the images didn’t change the hash, but he also showed that two images created the same hash, which shows that the AI is at work here and not just pixel matching. Now look here: this was posted as showing some of the activities in a Mac relating to mediaanalysisd, and it shows some activity related to MediaAnalysis.framework.
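The resize-invariance observation is easy to demonstrate with a much simpler perceptual hash than NeuralHash. The sketch below uses plain block averaging over a toy grayscale grid; Apple’s NeuralHash uses a neural network, not this, but the example shows why scaling an image up or down need not change its hash.

```python
# Toy "perceptual hash": downscale a grayscale image to a 4x4 grid by
# block averaging, then emit one bit per cell (above/below the mean).
# NOT NeuralHash -- just an illustration of resize-invariant hashing.

def avg_hash(img, size=4):
    h, w = len(img), len(img[0])
    bh, bw = h // size, w // size
    cells = [
        sum(img[y][x] for y in range(r * bh, (r + 1) * bh)
                      for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
        for r in range(size) for c in range(size)
    ]
    mean = sum(cells) / len(cells)
    return tuple(1 if v >= mean else 0 for v in cells)

def upscale(img, factor):
    # nearest-neighbor upscaling: same picture, more pixels
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

small = [[(x * 17 + y * 31) % 256 for x in range(8)] for y in range(8)]
big = upscale(small, 4)                 # 4x the resolution
print(avg_hash(small) == avg_hash(big))  # True: size didn't change the hash
```

Because each cell of the hash averages over a whole region, resampling the image leaves the region averages, and hence the bits, unchanged; two genuinely different images can also collide on the same bits, which is the collision behavior Schneier’s post described.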

First, the poster named Mysk showed some activity in the system related to mediaanalysisd, and then from there managed to find some other related content like this. The listed items here are very benign, and I would have searched for more detail there if I had a Mac with an M-series chip, but at least there is no doubt that it is doing image scanning. This content, to me, implies some sort of training model being uploaded to the AI. Here are the modules that apparently are connected to the MediaAnalysis framework.

You can just make some observations from the module names, which include references to recognition and camera motions. This is what ChatGPT says about what mediaanalysisd is for, and note that even ChatGPT is aware of privacy considerations with this module. This mediaanalysisd is truly a huge investment, apparently multi-year, with combined efforts on both the iOS and macOS platforms, and really extraordinary capability built in to scan images. And now you’re telling me that Apple did this primarily to protect us against CSAM? Yet everyone has really friendly little statements about the purpose of this photo scanning.

Look at this very benign statement from one of these security researcher types: the process is designed to run machine learning algorithms to detect objects in photos and make object-based search possible in the Photos app. Wow. ChatGPT has stated that this has facial recognition capability. So let me understand this from my viewpoint and how I look at tech. Do you not see how the AI on an Apple device that can scan photos and recognize faces, motions, situations, scenarios, and objects, all without your knowledge or consent, can be used for surveillance? I’ve explained how I think this tech developed in prior videos.

I talked about the terrorist attack in San Bernardino, California in 2015 and how Apple was being forced by the FBI to break into a phone, and Apple refused. But having the AI examine content puts Apple on safe ground and allows it to support the state. Every device in the world can be examined for content as required by government, and no physical person examines it. This frees Apple from the burden of having to break into phones. They can claim that iPhone is privacy all day. Yeah, apparently privacy from humans, but not privacy from the AI.

Now, after the San Bernardino attack and prior to this tech being revealed, the heads of three-letter agencies took to the press and said the solution to their encryption problems was to scan for content pre-encryption. They were already broadcasting that they wanted client-side scanning. Look at the press releases and interviews from then-CIA director John Brennan if you don’t believe me. I remember this. They wanted pre-encryption scanning, meaning on-device content scanning. This is not even conspiracy theory. I just have to follow the sounds. Quack. After Apple announced the CSAM technology, sure enough the governments of the EU, UK and US wanted to pass laws to allow for client-side scanning technology to save our children.

Quack. Almost exact timing too. All the laws were being considered last year. Wow. If it quacks like a duck it must be a duck. I hear nothing but quack, quack, quack. So clearly no researcher has made a claim that Apple is not doing neural hashes today on all your photos on your devices. If anything they are showing the opposite. That photos are getting this little tag from the AI chip. The only thing these researchers are saying is that there is no proof that these neural hashes are being uploaded to Apple. And that’s the flaw in the thinking.

As a software architect, if this were my project, why in the hell would I upload the neural hashes of billions of devices in the world to an Apple server? That’s crazy inefficient and bad software design. In fact, why even send anything to Apple other than a yes or a no to acknowledge if some search criteria based on neural hashes found results or not? An even better design, why even send a no? We only need to see the yes. And if there is not a yes transmission, why even generate traffic? So the idea for the original CSAM plan was that Apple would send image search requests to the AI of the phone, encrypted of course, and translate it into these hashes which would be indecipherable.

These instructions could be downloaded by the phone weekly, daily, whatever. This would just be a small file. Then the scanning module, mediaanalysisd in the case of a Mac, would see if there’s a match. Now what happens if there’s a match? Obviously this kind of instruction to an AI has to be guarded with the highest security. It has to be encrypted. It also has to be obfuscated. And the worst problem for a security researcher is that this can only be duplicated on a device with a search match. The vast majority of phones will have no traffic.
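The match-only design described above can be sketched in a few lines. This is my assumption about how such a system could work, not documented Apple behavior; the hash values and the `send` callback are hypothetical placeholders.

```python
# Sketch of a "report only on match" design: the device fetches a
# small list of target hashes, compares locally, and transmits
# nothing unless there is a hit. Hash strings are made up.

def scan_and_report(local_hashes, target_hashes, send):
    matches = set(local_hashes) & set(target_hashes)
    if matches:                 # the vast majority of devices never hit this
        send(sorted(matches))   # only a "yes" ever generates traffic
    return bool(matches)

sent = []
targets = {"a1f3", "9c0d"}                                # downloaded list
scan_and_report(["77e2", "b441"], targets, sent.append)   # no match: silence
scan_and_report(["9c0d", "b441"], targets, sent.append)   # match: one report
print(sent)   # [['9c0d']]
```

Note how this design defeats traffic analysis: a researcher watching a device with no matching content sees nothing at all, which is precisely why absence of traffic proves nothing.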

Remember I told you earlier that the X user Mysk detected, in the earlier version of the macOS mediaanalysisd, that there was just an empty HTTPS header being sent. Well, that could have been the test “no” at the time. This was removed since it was obviously unnecessary. But this offers clues that traffic need not be sent if there’s no match. Here’s the other little detail that I mentioned in my other videos: a match could be transmitted to Apple indirectly, for example using the Apple mesh network via BLE. The Apple mesh is encrypted. It is not HTTPS based.

It uses symmetric encryption and is part of the BLE standard. It is an existing tech and the transmission of a match could occur with or without internet. The BLE message could be transmitted even if the phone was off for example. An outsider cannot decrypt the BLE radio transmission. This is what I would call a side channel transmission. Makes it harder to spot. If Apple did any of the things I said here then how is the security researcher going to detect this? If I were Apple this is what I would do to hide it.

Also, why use any kind of public key encryption infrastructure? They should be smart enough to use symmetric encryption, which I’m sure can be done by the AI chip. So this would not be decrypted by the common methods used by security researchers to inspect common traffic. This is uncommon traffic. Also, some researchers are saying that some of the traffic intercepted may just be the AI model being updated, like what the gentleman who reported this to me was saying about the module called rapportd. So that’s another consideration, because it may be hard to spot significant content in the middle of all the noise.

Folks, a massive investment was made into doing client-side scanning, or scanning at the device level. If the true object was just simple object identification for photo searches, as one of the pro-Apple researchers said, then why do it at the device level? Why not just scan iCloud? Practically every photo is on iCloud, including nude photos made by people. That could have been done without any significant effort, and these scan results could be downloaded to the phone and matched to the original images. Instead, now we’re focused on AI chips, and this is supposed to be a good thing, that newer phones and newer computers have more and more powerful AI chips.

What are the AI chips for? And the new AI chips are becoming more powerful. Scanning your content is surveillance; I don’t know how you can wrap this any other way. AI chips are efficient little all-seeing robots, but unfortunately you don’t get to instruct the robots. The robots do whatever HQ, meaning Apple and Google, wants them to do, yet you’re buying the devices with your money. It’s your phone. Well, not really. This video focuses on Apple, but I want to alert you to the fact that an AI chip was also added to Google Pixels since the Pixel 6.

So this is a more general problem with all newer phones. The only phones guaranteed not to have client-side scanning are those running a de-Googled or Linux OS, as I’ve talked about in last week’s video. Folks, I do have a store where we sell pre-made de-Googled phones, phones that do not allow client-side scanning. Currently what we have in stock are Pixels of various models, with most being near $400, and these are flashed with CalyxOS or another OS of your choice. We will also flash your phone for you as a service if it’s one of the supported models.

Later this year we’ll have a new model, the Brax 3, but that is still many months away. This model will be for USA use specifically, and not for Verizon. You can also use the phones without a SIM card by using our Brax Virtual Phone product. We also have other services like Brax Mail and BytzVPN. All these are on my platform, Brax.me. You can sign up there and you will see the store. We do not collect identifying information at sign-up, so don’t worry about that. Thanks for watching and see you next time.


See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.





Tags

Apple client-side scanning, Apple device content scanning, Apple information retrieval debate, Apple privacy concerns, Apple's data collection methods, Apple's device scanning controversy, Apple's suspicious activity, client-side scanning evidence, client-side scanning investigation, client-side scanning proof, Mac Finder bug or feature, Mac user Finder usage, Mac user privacy issues, YouTube video on Apple scanning
