Client Side Scanning – See What You See – The Matrix is Complete! | Rob Braxman Tech


Summary

➡ Rob Braxman Tech talks about how Google recently added a new app to Android 9+ called Android System SafetyCore, causing concern among users. This app uses machine learning to classify media and detect unwanted content, similar to features introduced by Apple and Microsoft. Critics argue that this is a form of client-side scanning, a technology that allows tech companies to analyze user data under the guise of providing helpful features. This trend toward AI companions that “see what you see” is seen as a potential invasion of privacy, as it could record and analyze all user activities.

 

Transcript

Google recently auto-installed an app on Android 9 and above called Android System SafetyCore. This caused people to panic, since it was not made clear what this was all about. But here are the keywords used. Google says it is for sensitive content warnings, or generally they state that it performs classification of media to help users detect unwanted content. Now listen to what some so-called experts have said about this. The maintainers of the GrapheneOS operating system, in a post shared on X, reiterated that SafetyCore doesn’t provide client-side scanning and is mainly designed to offer on-device machine learning models that can be used by other applications to classify content as spam, scam, or malware.
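To make the stated capability concrete, here is a minimal sketch of what an on-device media classifier of the kind described looks like: an app hands an image to an OS module and gets a label back. The model itself is stubbed out; all names here are illustrative, not Google’s actual API.

```python
# Hypothetical sketch of an on-device media classifier: apps pass an
# image, the OS module returns a label. The ML model is stubbed out --
# every name here is illustrative, not any vendor's real interface.
from dataclasses import dataclass

@dataclass
class Classification:
    label: str        # e.g. "ok", "sensitive", "spam"
    confidence: float

def classify_media(image_bytes: bytes) -> Classification:
    """Stand-in for an on-device ML model. A real module would run a
    neural network over the pixels; here we fake it with a byte check."""
    # Pretend images whose first byte is 0xFF are "sensitive".
    if image_bytes and image_bytes[0] == 0xFF:
        return Classification("sensitive", 0.93)
    return Classification("ok", 0.99)

def warn_if_sensitive(image_bytes: bytes) -> bool:
    """What a messaging app might do with the verdict: blur and warn."""
    result = classify_media(image_bytes)
    return result.label == "sensitive" and result.confidence > 0.5
```

The key point of the design is that the classification verdict is produced locally, on device, which is exactly why the debate is about what happens to that verdict afterward.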

I’ve never heard a more fake explanation than this. Now I will tell you how this is the biggest pack of lies from shills of big tech who lay claim to cybersecurity knowledge. It should be obvious what this little module is for. I’m sick and tired of false information being disseminated to the masses to make you all feel good and make you feel that you have nothing to worry about. You have a lot to worry about. Of course Android System SafetyCore is about client-side scanning, but it will be justified by fake information that it is some benign feature that you’ve always needed.

The actual truth is that I’ve been waiting for this module for a long time. It had to come out, and I was wondering why it was taking so long. This module is part of what I call the See What You See technology and is directly tied to AI. It completes the circle to include all of big tech in client-side scanning. If you want to understand how this impacts you, stay right there. Once again, just like Windows Recall and Apple CSAM, some new capability is added to your phone that supposedly supplies features that you didn’t ask for.

Then it’s wrapped in legalese that makes you think you’ve always wanted this. This is tech companies gaslighting you when the cat is already out of the bag. The real intent of these features is now obvious and stated in their own words, which I will replay for you in a moment. I’ve been alarmed by this new technology going all the way back to around 2019. This has now been confirmed to be the actual direction of the big tech giants, Apple and Microsoft. I was just wondering when Google would start doing this.

And now it is clear that Google is implementing the exact same technology, because you will see it matches what has happened in the past. I’ll get back to that, but first a little tech history. If you follow what big tech companies have actually already done, you should detect a pattern. Now, I’m recalling from memory, so my dates will not be exact, but the first time I heard of this thing called sensitive content warnings was in an Apple keynote, probably around late 2019. Around that same time, it was also announced that iPhones would be able to automatically categorize photos locally and identify key events.

It would also be able to perform facial recognition to recognize your friends on device. When features like this are announced, the feature being provided is called AI image processing. It means some application is able to scan photos and identify objects in the photos precisely enough that nudity can be spotted, optical character recognition (OCR) can be performed, and facial recognition can be implemented. This is done by an AI model. Shortly after announcing this technology, which resulted in the reorganizing of your photo galleries in iOS into key timeline events, the next big announcement came in 2021, when Apple said it was going to implement scanning of photo content to identify illegal photos, called CSAM.

Now, Apple fanboys were quick to dismiss this because Apple claimed it was not going to be used to spy on your photo content and that your photos would not be screened by humans, meaning all the identification or classification of photos was going to be done on device, or as we say in tech, client side. And later this was buried by cybersecurity techies as a nothing burger because Apple claimed it was suspending the CSAM project. However, the module doing the scanning, mediaanalysisd, continues to run even today. Then in 2024, Microsoft surprised us with the release of Windows Recall.

This feature, as explained then, was another gaslighting attempt, because supposedly this is some feature you really needed: for Windows to take screenshots of your computer every five seconds and then have the screenshots analyzed for content. Supposedly this would enable you to remember what you did in the past, thus the moniker Windows Recall. But in December 2024, the real reason for this technology was laid out in a more obvious way by Microsoft AI CEO Mustafa Suleyman: “We are on a mission to create a true AI companion, and to me an AI companion is one that can hear what you hear and see what you see and live life essentially alongside you.”

This technology wasn’t about capturing screenshots so you can recall things in Windows. This technology is so the AI companion gets to know you. Here we are in 2025, and the next thing announced is OpenAI Operator. Again, the technology is the same as Windows Recall: take screenshots of your screen and evaluate the content. The point is for the AI to know exactly what you are doing, and the OpenAI Operator demos do show it is using screenshots to see what’s happening on screen. This is the basic technology being used to power most of the AI agents today.
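The capture-and-analyze loop described above (the Windows Recall / Operator pattern) can be sketched in a few lines. Capture and analysis are stubbed here, and the function names are my own illustrations, not any vendor’s real API; the point is only the shape of the loop.

```python
# Minimal sketch of a periodic capture-and-analyze loop of the kind
# described (Recall / Operator style). Capture and analysis are stubs;
# all names are illustrative, not any vendor's actual API.
import time

def capture_screenshot() -> bytes:
    # A real agent would grab the framebuffer; stubbed for illustration.
    return b"fake-pixels"

def analyze(image: bytes) -> dict:
    # A real agent would run OCR and image recognition here.
    return {"text": "", "objects": [], "ts": time.time()}

def recall_loop(history: list, iterations: int, interval: float = 5.0) -> list:
    """Every `interval` seconds, screenshot the display and append the
    analysis to a local history that an AI could later query."""
    for _ in range(iterations):
        history.append(analyze(capture_screenshot()))
        time.sleep(interval)
    return history
```

Note that nothing in the loop itself transmits anything; the output is a local history, which is exactly the point made later about why firewall tests show no traffic.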

This technology being implemented everywhere is obvious, and now apparently we’re being told, and being gaslighted by supposed cybersecurity experts, that this is for your safety. Yup, spotting sensitive content. Come on, people, I’m not stupid, and I don’t think any of you are either. The direction of the technology is already made clear. I didn’t ask for sensitive content warnings, and why is the technology for sensing nudity the same technology for AI to see what’s on your screen? What you are seeing, folks, with this introduction of Android System SafetyCore, despite the misleading name, is another version of what I call See What You See technology.

By the way, I have no idea if SafetyCore is the complete module or if it uses other pre-existing AI libraries, but it should be obvious to all that it is using AI image processing technology. When Apple first started to implement this, it was by scanning photo files. Today Microsoft, OpenAI, and I’m sure Apple Intelligence have shifted to doing screenshot scanning. Thus there is no reason why Google wouldn’t do the same. There’s obviously no actual difference here: a screenshot is basically a captured photo of the screen.

It’s just a difference in the source of the media. As stated in the descriptions of Android System SafetyCore, it can analyze images. That much is clear from its published capabilities. Now, what is different between analyzing a screenshot versus some existing photo? With all these technologies, the original one from Apple called mediaanalysisd, Microsoft Windows Recall, OpenAI Operator, and now Android System SafetyCore, the intent is the same: see what you see, record it, analyze it. And the point of this has always been for the embedded AI in these devices to know you intimately.

That is the full mantra of the AI, and it’s no secret. You now know the intent. So let’s move on to understanding what the direction is and how this is going to affect our lives. The direction of Microsoft Copilot, Apple Intelligence, OpenAI, and Google Gemini will obviously be all the same: have an AI know you so well that it will be your new AI companion. To do this, all these see-what-you-see technologies will take what you do on device and summarize your activities via screenshots. You’re writing a memo, screenshot. You’re writing a love letter, screenshot.

You’re looking at porn, screenshot. You’re communicating using end-to-end encrypted apps, screenshot. You’re creating a new crypto wallet and putting in your seed, screenshot. You’re reacting to some political post on social media, screenshot. So to make this entirely clear: when your device uses an embedded AI, an AI embedded in the OS, you should now expect that it will be recording everything you do. It is expected then that the AI will use this information to create a pattern of understanding of who you are, ostensibly so it can be a better AI companion.

Some of these cybersecurity people do an extremely shallow analysis, just like they did of Apple’s mediaanalysisd a couple of years ago. They were defending this module as not doing anything nefarious, even though the module was using the beginnings of see-what-you-see technology. That shallow analysis is now proven to be completely on the wrong track, because we already know what the intent is in big tech’s own words. But the way these techies justify the supposed safety of these modules is by putting a firewall on mediaanalysisd, Windows Recall, and Android System SafetyCore to prove that these modules weren’t doing any external communications.

Therefore, in their opinion, they had to be safe. I sure hope we’re not entrusting our national security to these folks, because this is such a lame analysis. As we already know, there is no reason for these see-what-you-see modules to communicate with any party, since that is not their purpose. Their purpose is to record and create a history for the AI, which they all do. Apple’s mediaanalysisd was creating a file with the analysis of each photo. Windows Recall was summarizing every screenshot, and I’m certain Android System SafetyCore will be doing the same thing.
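The “file with the analysis of each photo” pattern described above amounts to a local append-only log. A plausible sketch, with made-up paths and record fields purely for illustration, might look like this:

```python
# Sketch of the local "analysis history" such modules are described as
# keeping: one JSON record per analyzed item, appended to an on-device
# file. The path and record fields are invented for illustration.
import json
import time
from pathlib import Path

def record_analysis(log_path: Path, source: str, labels: list) -> None:
    """Append one analysis record (timestamp, origin, detected labels)."""
    entry = {"ts": time.time(), "source": source, "labels": labels}
    with log_path.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def load_history(log_path: Path) -> list:
    """Read the full history back -- what a local AI could later query."""
    return [json.loads(line) for line in log_path.read_text().splitlines()]
```

Writing the log requires no network access at all, which is why a firewall test on such a module would, by itself, prove very little.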

But remember, these modules are just AI agents. They are image processing agents. They can do OCR and image recognition. They are there simply to provide the data for the AI to use. There are many other agents. I’m sure that on phones, other agents are responsible for other sensors, such as those that can be used to intercept the speakers and hear what you hear. When the device starts analyzing what you’re doing and starts creating a record of what is happening on your device, then it is clearly doing a scan. And there is absolutely no way around the reality that these see-what-you-see technologies are doing client-side scanning.

I’m sure Microsoft, Apple, OpenAI, and Google are thinking about mitigation techniques to categorize certain elements of what the AI sees as private, and thus in theory block that out from the AI’s perception. Good luck with that, as there will be a million rules to create for a new classification of what is private. But these rules will be artificial, and I’m sure these tech companies will do their best to obscure the fact that this technology can see past your encryption, see your passwords, or see your nude photos. You cannot have secrets from the AI.
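As a toy illustration of why such “what counts as private” rules are brittle, consider a filter over analyzed text. Every pattern below is my own invention; a real system would need vastly more rules and would still both miss secrets and flag harmless text.

```python
# Toy illustration of rule-based "private content" filtering and why it
# is brittle. All patterns are invented for this sketch; they both miss
# real secrets and can false-positive on ordinary text.
import re

PRIVATE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US-SSN-like numbers
    re.compile(r"\b(?:[a-z]+ ){11}[a-z]+\b"),       # 12 lowercase words: maybe a seed phrase
    re.compile(r"password\s*[:=]", re.IGNORECASE),  # credential-looking lines
]

def looks_private(text: str) -> bool:
    """Return True if any rule matches -- content the AI 'should not' see."""
    return any(p.search(text) for p in PRIVATE_PATTERNS)
```

Notice the second rule: any twelve consecutive lowercase words trips it, so an ordinary sentence can be misclassified as a crypto seed, while a seed written with capitals slips through. That is the artificiality being pointed out.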

The reality is you will have a Trojan horse on your device watching what you do. Client-side scanning is someone watching over your shoulder who can see what you see. The buzzword they all use is that no third party will get access to your raw data, and I agree. There’s absolutely no reason for any third party to get a copy of your data, because that is not how it’s supposed to work. You will not be a Jeff Bezos with your nude photos spread to the internet. I don’t expect your screenshots to ever get exported to some other server.

This makes sense, as they can’t possibly sell this concept if your data ever leaves the device. But here’s the kicker: the AI will know everything that’s going on. Will these big tech firms put some blinders on the AI so it won’t see certain things? I doubt that. What they will do, and this is easier to implement, is leave the analysis of the screenshots alone and instead focus on the AI ignoring certain content when it is relating to you as your companion. Meaning, it will lie to you. So in theory, if you ask your AI companion to read your crypto seed, it should feign ignorance and act like it didn’t see it, though it will obviously be on record.

That’s how the gaslighting will continue, folks. This is really the big point: once see-what-you-see technology is put on a device, it cannot unsee. The device will know and the AI will know. So here’s the big question. We know that client-side scanning is being done. The question is, will the client-side scanning result be known to an external party? And folks, the owners of the OS, Microsoft, Apple, and Google, are not third parties. They are first parties. Just like they claim that third parties cannot see your location, these first parties know your location 24/7.

Any doubt about this? Go ask the J6 rioters who have been charged using location data from the Google Sensorvault. You are being fed lie after lie, and these cybersecurity fakes are complicit. The AI is communicating with the cloud servers. It is already stated by all these companies that though the bulk of AI processing will be done on device, a channel will be available to communicate with the main AI servers. So it should be clear that there is nothing to stop an AI from getting external instructions and having to tell someone else what it sees.

While it is clear that client-side scanning is an essential element of the AI companion, what is not clear to all is where the surveillance enters. Recently, it was posted on X that the UK secretly demanded a backdoor to Apple iCloud, and it is apparently a crime to reveal such a government demand. I presume such a demand would be handled using client-side scanning. This is also the procedure under the Patriot Act: if the government seeks information from any platform, the platform must provide the data and must not reveal the request.

So until we have another Snowden whistleblower, we will not know if the government demands access to client-side scanning data. And it would be illegal to reveal that such a request was made. Client-side scanning is an integral part of this tech history. Back during the Obama days, then CIA Director Brennan constantly stated to the media that end-to-end encryption was limiting access to data available to three letter agencies. Encryption was anathema to intelligence gathering. And you will recall my prior videos where I harkened back to the San Bernardino terrorist shooting event when Apple refused to help the FBI retrieve the contents of the suspect’s iPhone.

The end result of that refusal by Apple was that Director Brennan stated at the time that the solution to end-to-end encryption was to scan the data pre-encryption. This was the secret sauce. Scan data pre-encryption means having someone look over your shoulder while you’re using your favorite end-to-end encrypted app. In other words, scanning pre-encryption is client-side scanning, just as the three letter agencies wanted. By the way, CIA Director Brennan has been stripped of his security clearances by Trump. This may be the guy that triggered the implementation of client-side scanning everywhere. And today, the client-side scanning tools exist in every mainstream OS except for Linux.

Just recently, the US agency CISA, the Cybersecurity and Infrastructure Security Agency, was urging Americans to use end-to-end encryption. Now, why is that? How do you go from Brennan spouting the evils of end-to-end encryption to CISA encouraging us to use it? Well, it should be obvious once again: E2E is no longer an impediment to intelligence gathering. Even Zuckerberg recently stated this too. Client-side scanning, as desired by three-letter agencies, is now globally in place. The last piece was Android, and it is now here. The way this will work is the same way that Apple already described how they would have implemented the CSAM scanning.

You see, that was just a test of the see-what-you-see technology. All they’d have to do is ask the AI of every device, and only the device with the desired content will generate traffic to respond to HQ. This method of searching eliminates traffic from billions of devices. Only the needles in the haystack will raise a signal. Only the AI that detects a match would respond to HQ that the match is successful. Given this kind of approach, how are you supposed to detect this traffic using current methods? You cannot. Foolish statements by cybersecurity people that there is no evidence this is going on are a sign that they lack the technological expertise to understand the issue.
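The “only the needle responds” search pattern described above can be sketched simply: each device hashes its media locally, compares against a pushed target list, and only a match produces anything to report upstream. Real systems of this kind (e.g. Apple’s proposed CSAM design) use perceptual hashes such as NeuralHash; a cryptographic hash stands in here, and all names are illustrative.

```python
# Sketch of the match-only-respond search pattern: hash locally, compare
# against pushed targets, and only matches would ever generate traffic.
# SHA-256 stands in for a perceptual hash; names are illustrative.
import hashlib

def media_hash(content: bytes) -> str:
    # Real designs use perceptual hashes so near-duplicates still match;
    # a cryptographic hash keeps this sketch simple.
    return hashlib.sha256(content).hexdigest()

def scan_device(media: list, targets: set) -> list:
    """Return the hashes of local items that match the target list.
    An agent would report only these; non-matching devices stay silent."""
    return [h for h in (media_hash(m) for m in media) if h in targets]
```

Since a device with no matches returns an empty list and sends nothing, network monitoring on almost every device in the world would show no unusual traffic, which is the detection problem being described.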

Let me remind you again: if you want to not participate in client-side scanning, the only possible option now is an open source operating system. Linux, being open source, is not going to have embedded AI or client-side scanning. You can install a local open source AI on Linux and run it safely, so not all AI will do client-side scanning; the concern is AI embedded in an OS. On the phone side, any de-Googled, open source AOSP-based (Android Open Source Project) operating system will also be safe from client-side scanning. In summary, be careful who you listen to.

Very few people have the expertise to understand the privacy issues I bring up here. I’m not the FUD guy (fear, uncertainty, and doubt); that’s what the shills want you to think. In the meantime, these same groups will tell you that an iPhone is a very safe device, so they will hit you with their disinformation. Folks, this channel, as you know, doesn’t have sponsors. We are funded solely by this community, which uses the products we create or contributes via Patreon, Locals, and YouTube memberships. Thank you for your support. Just to remind you, we have created the Brax 3 phone, which is a privacy phone that is a community effort involving multiple parties, from the open source OS to the firmware and hardware.

This project is currently on indiegogo.com and has started production. It is still available at an early bird price, so check that out. It will be shipping in March. We have other products that are important to your privacy, like the Brax Virtual Phone, BraxMail, BytzVPN, and more. These are on our site, Brax.me. Please join us there, together with over 100,000 users who discuss privacy issues daily. Thank you for watching and see you next time.

See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.
