Slave AI Cameras Monitoring Us Everywhere? Big Brothers 1984 Finally Complete | Rob Braxman Tech

Posted in: News, Patriots, Rob Braxman Tech


Summary

➡ Rob Braxman Tech discusses a technology called Crowdsourced Facial Recognition, which uses devices to recognize faces and report findings to a central system. This technology, developed by companies like Apple, Google, and Amazon, can potentially find anyone, even if they’re offline. The article raises concerns about privacy, as this technology could be used for mass surveillance. It also mentions a specific feature by Apple, designed to detect illegal material, which was put on hold due to privacy concerns, but the technology behind it remains active.
➡ This article discusses the potential privacy risks of AI technology, particularly in mobile devices and security cameras. It highlights how these devices can use facial recognition to track individuals, even in the background of photos, and send this information to a central hub. The author expresses concern about the misuse of this technology, such as criminals paying insiders to locate someone. The article ends by encouraging readers to join a privacy-focused community and use products that protect their online identity.

Transcript

Here’s a very scary scenario to consider. I’ll call it, for now, Crowdsourced Facial Recognition. This is where every device around you is a slave trained to do facial recognition. And if you’re found by one of these devices, you get reported to HQ, which in turn reports you to the government or whoever your threat is. This is where the technology is headed. What I’ll be talking about is a technology that already exists; whether it will be used for mass searching of people and faces remains to be seen. But it can do it. They just have to flip the switch.

And maybe they’ve already flipped the switch. And that to me is so disgusting and unreal. Apple raised the ante by first developing this technology and Google followed. Amazon has their own implementation, which adds to the risk. What we’re talking about is the ability to find anyone, even if the person is offline or not even connected to the Internet. All because of surveillance capability being built into every mobile device and IoT device. Think of them as obedient robots that follow their master. We all agree to this because we don’t speak out when new tech is introduced or we believe in the sales hype.

Well, this one was promoted as having good intentions for society. But the actual end result of the technology is one that will render privacy and freedom pretty much impossible. This is like the book 1984, where there’s always a camera watching you. If you want to learn more, stay right there. This is a story I’ve discussed in past videos, but I will retell it with a new twist, a new way to use this tech beyond what I’ve previously described, focusing specifically on the capability for crowdsourced facial recognition. A couple of years ago, Apple introduced a feature which they claimed was designed to protect children.

It was a feature where iOS devices could spot illegal material called CSAM. I can’t even define CSAM for you because it would cause the YouTube censoring algorithms to go bonkers, but basically, it’s illegal photos relating to children. Apple announced that it would generate a database of suspect photos and then create signatures to recognize them. When such photos were auto-detected on your iOS device, you would get reported to Apple HQ, and then presumably this gets referred to law enforcement. At first, it sounds completely benign and not worth your deep attention. What was never completely understood here is what technology allows these particular photos to be recognized, and what exactly the phone is able to recognize in a photo.
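To make the idea of photo “signatures” concrete, here is a minimal sketch of signature-based matching using a classic perceptual hash (dHash). This is an illustration only: Apple’s announced system (NeuralHash) is a neural-network-based hash, and the tiny pixel grids below are invented stand-ins for real images.

```python
# Illustrative sketch of signature-based photo matching. A perceptual hash
# is designed so that near-identical images produce identical (or nearby)
# hashes, letting a device match photos against a downloaded signature list.

def dhash(rows):
    """Compute a difference hash from a grid of grayscale pixel rows.

    Each bit records whether a pixel is brighter than its right-hand
    neighbor; an 8-row by 9-column grid yields a 64-bit hash.
    """
    bits = 0
    for row in rows:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A fake 9x8 grayscale "photo" and a slightly brightened copy of it.
img = [[(x * 37 + y * 53) % 256 for x in range(9)] for y in range(8)]
noisy = [[min(255, p + 1) for p in row] for row in img]

# Uniform brightening preserves which pixel of each pair is brighter,
# so the two hashes land close together and the photos "match".
h1, h2 = dhash(img), dhash(noisy)
match = hamming_distance(h1, h2) <= 10
```

The point of the design is that the device never needs the original suspect photo, only its hash, which is also what makes the instruction files so small.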

And this is where we will go deeper into the rabbit hole. As it turns out, we have a much more sophisticated tool here for surveillance than most people have ever imagined. This plan to detect CSAM as announced in the press never got implemented as described, because Apple got pushback from parties concerned about privacy. Apple put this project on hold. Now, my wording here is very specific: implementation of CSAM detection is on hold. But the technology for doing this has been placed in every iOS device since iOS 13, I believe. And I hear it is also now implemented on macOS.

The very deep question I have is what made Apple create this technology to begin with? It is hard to believe that this whole sophisticated architecture was specifically designed from the beginning to focus on such a narrow area as CSAM. This is very suspect, because what you will discover is a surveillance architecture that is perhaps the most sophisticated that’s ever been made. So if this is such a powerful technology, did Apple really put this on hold? Surprise, surprise, the API or programming interface to implement this feature remains active on the OS. What exactly is this technology, and what does it have to do with crowdsourced facial recognition? Let me first explain this technology in detail and you will begin to understand its power.

To start with, we need to talk about the AI chip on the iPhone, which was first introduced on the iPhone X. Apple calls this the Neural Engine. The AI chip is specifically geared toward image processing, meaning the chip is optimized to do image scanning, but in a much more advanced way. This chip can do facial recognition, which includes the infrared (IR) capability used for Face ID on iOS, but it is not limited to that. By the way, a similar chip has been included on Google Pixel phones since the Pixel 6, so this is not limited to Apple. If you have no knowledge of this AI image scanning technology, let’s start with a common example of a pre-existing technology that we can find on YouTube.

Each time you upload a video to YouTube, the YouTube AI will scan each video frame by frame, photo by photo, and look for content that goes against YouTube’s terms of service. For example, the AI looks for children or weapons or sexual innuendo or different levels of undress. This is so specific that the AI will also discover if there are any objects in the video that could affect its advertising strategy. The AI could identify cans of Coke in the video and perhaps not run a Pepsi ad on that content. Words in the image, like maybe some text on a coffee cup, would be character-recognized.

But here is where it gets more sophisticated. The AI detects the faces. It will analyze the face not just for recognition, but can also describe the face by age or gender or even expression. The AI could tell if you’re happy, sad, anxious, or afraid. It could understand the behaviors of multiple people in the photo, such as acts of violence being performed. If you’ve had the impression that YouTube censors are actual people who watch every video and flag questionable content manually, you’ll have to retrain your brain here. This is all done by a big Google server AI that scans every video being uploaded.
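The frame-by-frame scanning loop described above can be sketched as follows. This is a simplified illustration: `detect_objects` is a placeholder stub standing in for a real vision model, and the label names and policy sets are invented for the example.

```python
# Minimal sketch of a server-side, frame-by-frame content scanning pipeline.
# In a real system each frame would be decoded from video and run through a
# neural network; here each "frame" is a dict carrying pre-baked labels.

def detect_objects(frame):
    """Stub detector: returns the labels embedded in our fake frames."""
    return frame["labels"]

POLICY_VIOLATIONS = {"weapon"}   # assumed terms-of-service categories
AD_SENSITIVE = {"coke_can"}      # assumed ad-strategy categories

def scan_video(frames):
    """Scan every frame and aggregate anything moderation or ads care about."""
    report = {"violations": set(), "ad_flags": set()}
    for frame in frames:
        labels = set(detect_objects(frame))
        report["violations"] |= labels & POLICY_VIOLATIONS
        report["ad_flags"] |= labels & AD_SENSITIVE
    return report

# Fake two-frame "video": no policy violation, but one ad-sensitive object.
video = [
    {"labels": ["face", "coffee_cup"]},
    {"labels": ["coke_can", "face"]},
]
result = scan_video(video)
```

The structure is the same whether the loop runs on a Google server or, as discussed next, on the phone’s own AI chip.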

Now let’s go back to the AI chips on the phones. The YouTube AI is being handled by a big giant Google server, as I said. The AI chips on phones can perform the image scanning as well, but now embedded into the phone itself. The offloading of this kind of processing means that the AI need not be connected to a central AI server. Each phone can be an image processing robot. Now Apple expanded the neural engine capability to do something beyond image processing. And this is the slave robot portion of the technology. Apple created a system for sending remote instructions to the AI chip.

The AI chip can then run locally and do local image scanning, and it will report back to Apple only if a match is found. There are many interesting ingredients here. First of all, the instructions to the chip are specifically intended for image scanning and are very brief. For example, the AI could be told to search for a specific person who is in some specific situation. This is how they would have described those illegal CSAM photos they would have in their database. Now here’s the advanced capability: the remote instructions to the AI are hidden and mathematically obscured by hashing.

So no readable instructions become apparent to any of us. In effect, the phone AI will receive remote instructions in a secret way, and the phone will raise an alarm to HQ only if the AI finds a match. So this will be very stealthy. The effect of this technical design is that thousands of search instructions could be sent to every phone in very small, encrypted files. These could go to maybe a billion phones while generating very little internet traffic. The instructions could be sent infrequently. Only those few phones that have a match will respond.
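The flow just described — opaque hashed instructions in, silence unless there’s a match — can be sketched as below. This is an assumed, simplified illustration, not Apple’s actual protocol: a real system would use a perceptual hash tolerant of edits, whereas the SHA-256 stand-in here only matches exact copies.

```python
# Hedged sketch of a client-side scanning flow: the device receives an
# opaque watchlist of hashes, scans local content, and phones home only
# when something matches. Names and formats are illustrative.

import hashlib

def signature(photo_bytes):
    """Stand-in for a perceptual hash; a real system tolerates small edits."""
    return hashlib.sha256(photo_bytes).hexdigest()

def client_side_scan(local_photos, watchlist, report):
    """Scan local photos against hashed instructions; report only matches.

    `watchlist` is a set of opaque signatures -- the device cannot tell
    what it is being asked to look for. If nothing matches, no traffic
    is ever sent.
    """
    for name, data in local_photos.items():
        if signature(data) in watchlist:
            report(name)  # the only network traffic in the whole flow

# HQ side: build the opaque watchlist from a target photo.
target = b"photo-of-target-person"
watchlist = {signature(target)}

# Device side: scan a local photo library; only the match is reported.
sent = []
client_side_scan({"IMG_001": b"vacation", "IMG_002": target},
                 watchlist, report=sent.append)
```

Note the asymmetry: the watchlist is tiny and meaningless to an observer, while the report carries exactly the information HQ wants.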

What is being matched here? It’s the photos on the phone. It’s basically the capability of the phone AI to scan your content. The technical term for this is client-side scanning. Now, to add to the capabilities of this technology, let me remind you that Apple, as well as Google, can now make use of the BLE mesh network, the Bluetooth Low Energy network I talked about in a recent video. The end result is a very efficient way of giving instructions to AI chips on every phone. Meaning, if they wish to, the AI instructions could be distributed via Bluetooth BLE.

This means it will be possible to give instructions to phones without an internet connection, phones without a SIM card, or phones that are turned off but running BLE, as is now the case with iPhones and the Pixel 8. A very efficient way. These AI chips will follow instructions given by HQ and then send a report back only if there’s a match. If there’s no match, no traffic. So theoretically, this could do bi-directional traffic that would be hard to beat, even on phones you think are disabled. This kind of technology cannot be stopped by disconnecting from the internet or removing the SIM card.

Even without Wi-Fi or cell data, the phone can send a signal to another iOS device within 600 feet and get the data reported to Apple. I mentioned earlier that Apple suspended that CSAM implementation, but I want to remind you of a Louis Rossmann video from last year, in which some cybersecurity researchers were able to intercept API calls from the file manager in iOS, and the API calls doing image scanning were clearly being performed. Thus, while the specific application for CSAM may not be in operation, there’s no reason why this may not be repurposed for some other objective.

So this image scanning is active today. There are many possible uses of this AI technology that can scan local device content based on instructions from an HQ. But for this video, I’ll focus on just one. To highlight this point, I will talk about the opportunity I had to have a conversation with John McAfee when he was alive. One of the things I discussed with him was his approach of announcing his presence at some location on Twitter. One time he posted a photo of himself out in a busy area in Germany and then he posted another picture where he was in a very recognizable restaurant nearby.

I asked him if he was worried about being found at these places by the three-letter agencies he was trying to evade, since he was basically advertising his location. But of course, I knew what his answer was going to be, though it was nice to have him verify it. John McAfee told me that he basically showed locations where he had been roughly two weeks before. So he had a strategy and he thought he was safe. But this did not always work out. In another case, he was spotted at a restaurant in a small town in northern Iceland. This was clearly a very small town and I’m sure everyone knew each other.

So he would certainly stick out. He was apparently hiding out upstairs in a restaurant, in a space which he claimed was electronically protected from radio emissions, basically like a Faraday cage. But in this case, someone did recognize him and posted the location on Twitter, and McAfee had to scramble to leave Iceland. I believe he went to the UK posthaste. Just generally though, based on the technology at the time, the fear I had was that some Facebook user would accidentally take a picture of him in the background in one of these restaurants, and the Facebook AI would find him once the photo was posted on Facebook.

This is something that the Facebook AI did frequently by tagging all your friends in your photos. In this new scenario, with this newer technology, people taking random pictures need not notice that a John McAfee was in the background. And nothing needs to be posted to social media. This is the big difference here. In fact, no action needs to be taken, as the phone AI would immediately recognize via facial recognition that John was in some photo taken by some person, and the EXIF metadata of that photo would reveal the GPS coordinates and timestamp. Again, the target could just be in the background by accident.
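The GPS coordinates mentioned here are stored in a photo’s EXIF tags as degrees/minutes/seconds values plus a hemisphere reference. Reading the raw tags would typically be done with a library such as Pillow; the conversion to the decimal coordinates a tracker would actually use is simple arithmetic, sketched below (the Berlin-area values are made-up examples):

```python
# EXIF GPS tags store latitude/longitude as (degrees, minutes, seconds)
# plus a reference letter: 'N'/'S' for latitude, 'E'/'W' for longitude.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds and a hemisphere
    reference into signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# Example tags as they might appear in a photo taken near Berlin.
lat = dms_to_decimal(52, 31, 12.0, "N")   # 52.52
lon = dms_to_decimal(13, 24, 18.0, "E")   # 13.405
```

Combined with the EXIF timestamp, two numbers like these pin a target to a place and a moment, which is all a search instruction would need returned.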

In popular tourist spots, the likelihood of being in the background could be quite high. So Big Tech, at the behest of some three-letter agency, could be asked to send these AI instructions to look for a particular person and have that reported back to HQ together with the GPS coordinates and timestamp of the event. Applied to the common man: let’s say you’re trying to hide from some violent criminal, maybe a drug lord, and this drug lord pays some insider at Apple or Google to run a search for you. Well, the fact that you can’t avoid being in the presence of mobile phones ensures that you cannot escape.

You will eventually be found, and the drug lord will get you. Here’s a new wrinkle we need to add as a capability. As I discussed, client-side scanning supposedly scans photos and videos. But what is viewed by the camera may not yet be a photo or video. However, what is the actual technical difference? Obviously, some API can capture what’s seen by the camera, and the AI could process that. In fact, when you do Face ID, that AI scan has nothing to do with stored photos. So this is the next level of my fear.

This is when the presence of cameras on phones, even when not actively taking photos, could in fact be answering to the instructions of HQ. This is the next level: the AI actually monitoring the camera. Is it possible? Yes, there is no technical difference here, since capturing stills from the camera is no different from saving them to your photo library. This is now the stage of 1984 where Big Brother has a camera watching everyone. The only difference here is that it’s the AI watching you. By the way, I don’t want to limit this discussion to phones. It is already a known fact that Ring cameras supplied by Amazon capture the videos made by these devices, store them in a central database at Amazon AWS, and share this with law enforcement.

Amazon AWS has also been actively training a facial recognition AI which, in fact, any one of you can rent for use. So here we have the expanded case of not just phones, but all devices with a camera having code meant to detect faces and have that reported to some Big Tech HQ. Currently, Amazon uses a central AI to do the processing, but it wouldn’t be too hard to integrate facial recognition directly into the Ring cameras using technology similar to Apple’s. Some rudimentary facial recognition is already being done by several security camera companies, and I’ve already played around with that.

I want to address another matter relating to facial recognition. Some people left comments on my last video saying that we now have to fear IR (infrared) facial recognition. Actually, this is not what I’m worried about. Although phones can use IR to do Face ID, the training of massive databases for facial recognition is focused on bitmap 2D photos. IR analysis for Face ID is not really an AI kind of threat, as there are no large volumes of IR facial data for machine learning. So though facial recognition via IR has special use cases, usually involving faces for device security, it is not the same privacy threat for now.

This, my friends, is what is going to make the choice to disappear a near impossibility in the near future. Remember that in addition to phones and security cameras, we have cars actively collecting photo information constantly. The new AI method of surveillance is very efficient. In the old days, the internet traffic generated by constantly passing and storing every photo ever taken into a central database would be massive. For example, China’s facial recognition infrastructure at every intersection likely relies on centralized servers collecting videos continuously. Very resource intensive. But using a local device AI to find people means there’s hardly any internet traffic.
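A back-of-the-envelope calculation shows why the on-device approach is so much more efficient than centralized collection. Every figure below is an illustrative assumption, not a measured number:

```python
# Rough comparison of the two surveillance architectures described above.
# All quantities are invented for illustration.

PHONES = 1_000_000_000          # assumed one billion devices
PHOTOS_PER_DAY = 10             # assumed photos taken per device per day
PHOTO_BYTES = 3 * 1024**2       # assumed ~3 MB per photo
INSTRUCTION_BYTES = 10 * 1024   # assumed ~10 KB hashed watchlist update
MATCHES = 100                   # assumed handful of devices with a hit
REPORT_BYTES = 1 * 1024         # assumed ~1 KB match report

# Centralized model: every photo is uploaded for server-side scanning.
central_daily = PHONES * PHOTOS_PER_DAY * PHOTO_BYTES

# On-device model: one small instruction file out; only matches reply.
local_daily = PHONES * INSTRUCTION_BYTES + MATCHES * REPORT_BYTES

# Under these assumptions the centralized model moves thousands of times
# more data per day than the client-side scanning model.
ratio = central_daily / local_daily
```

Under these made-up figures, the centralized model moves tens of petabytes a day while the on-device model moves on the order of ten terabytes, which is the efficiency argument in a nutshell.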

Technologically speaking, this is actually amazing when you think of what these devices are capable of. However, this same sophistication is a collar around our necks, because it means we are forever shackled. Our devices are now slave devices, and Big Tech is their master. They are obedient servants that follow the surveillance needs of their master, even though these phones and devices are supposedly owned by you. That’s the big gotcha here. You buy the devices, and they use these same devices to enslave you, using your own money. Brilliant. And you cannot escape this, because the devices that will track you are not even yours.

They’re owned by the sheep around you. Friends, the noose is continuously being tightened to control us, with the absolute elimination of our privacy. We still have the ability to defend ourselves, though. I created a community on the app Braxme, where we now have over 100,000 active users. Join us there and share advice and ideas with others who are also interested in privacy and security. On that app, I have a store with several products that will be key to defending ourselves by obfuscating your identity online. We have the SIM-free Brax Virtual Phone, which allows you to have an identity-free phone number.

We have the de-Googled phones, which are identity-free phones. We have the Braxme product, which has no identifiable metadata. Our BytzVPN service will obscure your IP address and more. Please subscribe so you’ll learn about the complex problems we face; unfortunately, this cannot be learned in one video. Thank you for watching and see you next time.

See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.





