Transcript
I hate to be a bearer of bad news with "I told you so" videos, but this is something I've been warning about in multiple videos since last year. Long-term technological trends are often guided by powerful entities, and in the current world that means the three-letter agencies, the surveillance complex. As usual, they've gotten their way. They won. You've all been fooled. The technological changes that will be pretty much global by the end of 2024 will provide the ability to scan all content on your devices pre-encryption. The result is that there will no longer be any security or privacy in end-to-end encryption on standard devices.
A new way of doing encrypted communication would have to be invented, and they didn't even have to break the encryption math. I've given you the history of why the powerful surveillance complex wants this in prior videos, and the tech giants have complied. Oh, I'm going to hear the fanboys of the various Big Tech companies saying there's no problem. They'll say that I'm just a fearmonger. But history will eventually prove them wrong. My long-held theory is that if a technology is created that has the potential for some evil use, then that evil use will occur, regardless of the stated good intent.
In this case, the evil that will be built into all new devices by the end of 2024 is the local AI and the NPU chip. This will spell the end of end-to-end encryption through typical apps like Signal, Telegram, and WhatsApp. But at least I have an idea of how to get around it, which I will discuss later on. If you want to learn more about this new change in computers called the NPU chip and how it will end end-to-end encryption as we know it, stay right there. Okay, terminology time: the term NPU means neural processing unit.
Basically, this is an AI chip that is becoming part of all new devices by the end of 2024. Currently, the NPU is already found in all new iOS devices. It is called the Apple Neural Engine and first appeared in the A11 Bionic chip, with new versions following. There's a Mac version as well: Apple Silicon, the M-series chips, which are the ARM versions of the Mac. Google has the Tensor chip, which premiered in the Google Pixel 6. Just this month, Microsoft announced the Copilot+ PCs, which are NPU-equipped Windows machines. These run the Snapdragon X Elite chip from Qualcomm, the first ARM chips to run standard Windows.
Intel and AMD are also beta testing their NPU models, expected to be released by the end of the year; the Intel chip is known as Lunar Lake. So basically, if you're getting a new computer and you will be running iOS, macOS, Windows, or Android, then whether you like it or not, you will now have the luxury of these AI chips, or NPUs, and the operating systems will be using them. Wonderful, right? With the normal computer lifecycle, this means that in five years, the majority of computing devices in the world will be NPU-powered.
Obviously, my focus today is on this NPU chip and its effect on E2E and privacy. That is one problem with it, and there are more, as I've explained in recent videos. But I'll focus specifically on the NPU this time. I'm sorry for being repetitive in some of my content, but some people will not have seen my prior videos, so I have to explain the source of the danger with a brief history. History adds context, and if you pay close attention, it is an indicator of future actions. Back in 2015, there was an incident caused by two terrorists who killed 14 people in San Bernardino, California.
These terrorists were later killed by the police after a car chase, and their motives and history remained a mystery. The only thing left that could possibly explain the attack was in their iPhones. The FBI asked Apple to break into the phones to retrieve the information in them, and Apple refused. Apple was heavily pressured by many people to give in and break the encryption, including, by the way, pressure from tech billionaire Bill Gates. I think Apple realized that the marketing of iPhones would be severely impacted if they ever revealed that they could break into an iPhone, so I can see why they would not let that happen.
Shortly after that, the heads of various three-letter agencies took to the press about the new encryption problem and how it prevents their organizations from doing surveillance, which they claim is for the safety of the people, and they demanded a solution. While some foolish politicians who don't know anything about encryption made stupid laws that accomplished nothing (I'm talking about Australia here), the heads of these agencies started touting their smart solution: scanning content pre-encryption. For example, if someone could watch your screen as you type in the Signal app, then Signal's encryption doesn't matter.
You basically bypass E2E if someone can see the screen. If someone is watching over your shoulder as you use any messaging app, then of course everything you're doing will be known. There are many ways of surveilling your device that are equivalent to having someone watch over your shoulder. The easiest is keylogging, where every keystroke, mouse click, or finger tap is recorded. Another is to simply take screenshots of what you are doing on your screen, and the information will then be available.
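To make the "watching over your shoulder" point concrete, here is a minimal Python sketch with entirely hypothetical names, using a toy XOR cipher as a stand-in for real E2E cryptography. It shows why a keystroke hook defeats encryption: the hook sees the plaintext before the cipher ever runs.

```python
import hashlib

def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stream cipher (illustration only, NOT secure): XOR the plaintext
    with a keystream derived from the key. Stands in for the real E2E layer."""
    stream, counter = b"", 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

keylog = []  # what a hypothetical OS-level hook accumulates

def on_keystroke(char: str):
    keylog.append(char)  # captured BEFORE any encryption happens

# The user types a "private" message; every keystroke passes the hook first.
message = "meet at noon"
for ch in message:
    on_keystroke(ch)

ciphertext = toy_encrypt(message.encode(), key=b"shared-secret")

# The wire only carries ciphertext...
assert ciphertext != message.encode()
# ...but the hook already has the plaintext, so E2E bought nothing.
assert "".join(keylog) == "meet at noon"
```

The encryption here is irrelevant to the attack: the capture happens one layer below the messaging app, which is exactly the point.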
Another way is to intercept text rendering on the GPU and capture the text data before it is turned into a graphical font. The limitation of the screenshot method is that it takes up a lot of space, and graphical data by itself is not easily categorized by traditional computing. Enter the NPU. The NPU solves the problem of analyzing graphical images with an image-processing AI model. The AI features of the NPU enable the operating system to understand what's in an image. An AI can identify each object and each person and know what's happening in the picture, in such detail that it can isolate an object and remove it from a photo, for example.
These are common tricks you can now do with photo-editing AI on phones. The AI can also do character recognition and facial recognition on a graphical image. Image-processing AI has been around a while; what's new is that it is now embedded in all these new devices, where before, images had to be sent to the cloud to be processed on Big Tech's giant servers. This new technology was first leveraged for surveillance by Apple. A couple of years ago, Apple announced that they would scan iOS devices, meaning do on-device scanning, supposedly looking for illegal photo content called CSAM.
Why this was suddenly considered a priority was never entirely clear. They came up with various explanations of how it would be safe, because the Neural Engine on iOS would do the analysis and not some human. This was the new solution: use the AI to surveil the phones. They found a way to have someone watch over your shoulder, but the marketing savior for Apple is that they can say no human is involved. This is now the cover that Big Tech needed: they satisfy the requirements of the surveillance complex, but they can claim that no human is doing the spying directly, although they will be doing it indirectly.
Last year, the EU, UK, and US tried to pass laws to make this concept of device scanning a required tool, supposedly to save our children from illegal photos like CSAM. Again, this was suddenly framed as a new priority. Only the UK managed to actually pass its law, but this simultaneous support for local device scanning across the Western countries was eerie, as if it were suspiciously pre-planned and coordinated. Then suddenly here we are in 2024, and all the holes have been patched in the infrastructure. From here on, all devices will have the technology to allow an AI to scan devices for content.
I know the doubting Thomases will accuse me of fear-mongering as usual, though historically I end up being right. But let's be clear: the hardware is there to put someone watching over your shoulder on every new device. Apple already proved this. The question is whether there will be software to enable it. So let's look at the current evidence. We've already discussed that Apple proved the concept doable when it announced that it could scan for CSAM. Then Apple suspended the project, or so they said. However, there is no doubt Apple devices are scanning all photo content and creating a database of what's in your phone.
On an Apple Silicon Mac, this process was found to be a daemon called mediaanalysisd, which is always running and cannot be stopped. No one has yet proved that this photo data is being sent elsewhere, but there is no doubt that the results of the image analysis are on the phone; Apple calls these neural hashes. I've said multiple times that it would be trivial for Apple to push instructions to scan for any content they want, including facial recognition, and do it surreptitiously on any group of Apple devices. So whether it is being done or not, the capability is very easy to deploy.
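For intuition on what a hash-based photo scanner does, here is a stdlib-only sketch of the classic average-hash technique. This is a rough stand-in, not Apple's algorithm (NeuralHash is a learned model): an image is reduced to a compact fingerprint that can be compared against a database of target fingerprints without the image itself ever leaving the device.

```python
def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns a 64-bit int
    where each bit records whether that pixel is brighter than the mean.
    Similar images yield similar hashes, so a list of target hashes can be
    matched on-device against every photo in the library."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for v in flat:
        bits = (bits << 1) | (1 if v > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits: small distance means a likely match."""
    return bin(a ^ b).count("1")

# Synthetic "images": a bright corner, a slightly noisy copy, and a
# completely different layout.
bright_corner = [[200 if x < 4 and y < 4 else 40 for x in range(8)] for y in range(8)]
slightly_noisy = [[v + 5 for v in row] for row in bright_corner]
different = [[40 if x < 4 else 200 for x in range(8)] for y in range(8)]

h1 = average_hash(bright_corner)
h2 = average_hash(slightly_noisy)
h3 = average_hash(different)

assert hamming(h1, h2) == 0   # near-duplicate still matches
assert hamming(h1, h3) > 10   # unrelated image does not
```

The key property is that the database entry reveals little about the photo, but the scanner can still flag matches locally, which is exactly the "no human is watching" framing described above.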
The data is already there. Then in the last month, Windows made the mother of all NPU announcements: in Windows 11, keylogging and screenshots of what you are doing are enabled as standard events, and the NPU, if available, analyzes the screenshots for content, with that data stored historically. Thus, the actual mechanisms for scanning pre-encryption are now in place, a complete infrastructure. The only thing missing is the command-and-control structure to query your device from an external party. So far, the Windows case is much more elaborate than even the Apple infrastructure.
Windows records everything you do on a Windows computer: on a Copilot+ PC, Windows takes screenshots every five seconds, the NPU analyzes what you are doing, and that data is stored in a database of your activity history on your computer. Microsoft calls this Windows Recall. By the way, I hear that Microsoft is supposedly pulling back on Windows Recall, but this is a beta product, so expect them to continue working on it. Someone, for example, found that data from Recall was not encrypted, so they could be working on hiding all this with a little encryption.
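A Recall-style pipeline is easy to picture in miniature. This hypothetical Python sketch replaces the real screenshot-and-OCR step with canned text, but it shows the essential point: once screen contents land in a queryable local database, anything with access to that file can search your entire history.

```python
import sqlite3
import time

# In-memory stand-in for a Recall-style activity database. The real
# pipeline (screenshot -> NPU OCR -> database row) is simulated here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE activity (ts REAL, app TEXT, screen_text TEXT)")

def record_snapshot(app: str, extracted_text: str):
    """Simulates one capture cycle: the OCR'd screen text goes into the DB."""
    db.execute("INSERT INTO activity VALUES (?, ?, ?)",
               (time.time(), app, extracted_text))

record_snapshot("Signal", "meet at noon by the fountain")
record_snapshot("Browser", "cheap flights to lisbon")

# Any process (or any AI assistant) with file access can now mine history,
# including text that was typed into an end-to-end encrypted app:
rows = db.execute(
    "SELECT app, screen_text FROM activity WHERE screen_text LIKE ?",
    ("%noon%",)
).fetchall()
assert rows == [("Signal", "meet at noon by the fountain")]
```

Note what is searchable here: the Signal message itself, captured as screen text before encryption ever entered the picture.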
Do you see the difference here? Microsoft is capturing everything, and they're not denying it. Apple has so far only been shown to scan photos, so the Apple data is not complete enough to have someone watch over your shoulder. The Microsoft case is clearly very complete. And remember that Bill Gates was all for breaking into encryption. To complete the surveillance structure for watching over your shoulder, the Copilot AI, which is really just a private version of OpenAI's ChatGPT, a large language model, is now preloaded on all Windows 11 machines. And apparently this Copilot AI can scan your Windows Recall data to see what has happened on your Windows timeline and use it as part of its intelligence.
Except that Copilot is in the cloud. It is an external server with the new ability to analyze your own personal history on your machine. It is not entirely clear whether the NPU can run some LLM functions like ChatGPT locally on your device, or whether it needs to send data to the cloud. The newly announced Apple Intelligence is claimed to provide some of the LLM functionality via the Neural Engine on the newer chips, but the line is vague. Is it possible for the connected AI to scan your content and report it? Is two-way communication possible, so that the AI can be commanded to read your device content and analyze it? Again, a trivial task, since the AI on the servers of Microsoft, Apple, and Google would obviously know what interactions it has with any device.
So the answer is that it is definitely possible, and easy. Thus, if some three-letter agency says that in the interest of national security it needs to find people with certain device content, could that be done? Obviously, the answer is yes. And your Windows device would have the keylogging and screenshot content to reveal what you've typed in apps like Signal, Telegram, and WhatsApp. In other words, there's no point in using end-to-end encrypted apps. That guarantee of absolute security and privacy is gone. Some of you will have a quick solution, I'm sure.
Hey, why not use Linux? Linux will not have this NPU crap or local device content scanning. Well, that's true, but here's the problem: to guarantee security, both parties need to be on safe hardware and a safe OS. In other words, you're left with safety only in Linux-to-Linux communications. Unfortunately, you often don't even know what devices your contacts are using; more than likely they are on mobile phones, which will naturally have these device-scanning features. Crazy, right? Now, in the future, how would I get around this? Well, it would still be possible to do E2E, because no one broke the encryption math.
But it would be more complex. It would need some new hardware, and it would require apps to detect whether the recipient of the message is using the allowed hardware. This is the only way to circumvent having someone watch over your shoulder. It is basically similar to what is used in offline cryptocurrency wallets like the Trezor. You would need a device that is not connected to the internet. It would need a screen and keyboard, of course, or be phone-like. It could then run an app to retrieve messages by plugging into an existing device like another computer or phone.
Then, offline, you decrypt the message on that device. You can pre-encrypt a response and have it sent over your main device after you connect. The E2E encryption keys would exist only on the offline device. Again, for this to work, the hardware has to be validated as certified for this use, which may require this type of hardware to have security chips of its own to enforce the verification. The goal is for no unencrypted text ever to be visible on tainted devices like Windows, iOS, macOS, or Android.
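The offline-device scheme described above can be sketched as follows, with purely hypothetical class names and a toy XOR-keystream cipher standing in for real cryptography. The point is the trust boundary: the key and all plaintext live only on the air-gapped device, while the NPU-equipped machine only ever relays ciphertext.

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher (NOT secure), a stand-in for real E2E crypto."""
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, stream))

class OfflineDevice:
    """Air-gapped unit: holds the key and does all encryption, decryption,
    and display. The key never leaves this device."""
    def __init__(self, key: bytes):
        self._key = key
    def compose(self, plaintext: str) -> bytes:
        return keystream_xor(plaintext.encode(), self._key)
    def read(self, ciphertext: bytes) -> str:
        return keystream_xor(ciphertext, self._key).decode()

class OnlineDevice:
    """The 'tainted' NPU machine: it only sees and relays opaque bytes, so a
    keylogger or screen scanner running on it has no plaintext to capture."""
    def relay(self, blob: bytes) -> bytes:
        return blob

alice = OfflineDevice(b"shared-key")
bob = OfflineDevice(b"shared-key")
internet = OnlineDevice()

blob = internet.relay(alice.compose("meet at noon"))
assert blob != b"meet at noon"           # the online device saw only ciphertext
assert bob.read(blob) == "meet at noon"  # recovered only on the offline device
```

In a real system the shared key would come from proper key exchange and the offline device would attest to its certified hardware, but the division of labor stays the same.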
All the encryption and UI would be done on the external device. It's going to be tedious, of course, and not as convenient, so people who cannot handle the change will continue to use the current schemes like Signal, Telegram, and WhatsApp. The problem with a complex solution like the one I describe is that the vast majority of people will resist the inconvenience. As a result, people will just accept that someone is watching and, unfortunately, carry on with life. It is the same today with SMS, which is a common way of communicating.
Yet a copy of every SMS is retained by the carriers and the three-letter agencies: everything you've ever said in a text, in the hands of the surveillance complex. We are trapped now, and the solutions are not going to be easy. The infrastructure is in place, though perhaps not yet complete; I'm talking months, not years. Likely an AI will be watching everything we do on our devices, and this is just the beginning. Later on, the AI could be watching us physically, everywhere. As someone on X posted about AI, these are the most exciting times in the world.
Yep. Folks, I started a company with the focus of helping us all take control of our privacy. To that end, I created a community of privacy-oriented people in my app, BraxMe. There are over 100,000 users there discussing privacy issues like the ones I'm discussing today. Join us there and be part of the community. On that site we have a store with products that are tools for our protection, and those products support this channel. We have the Brax virtual phone product, which allows for identity-free phone numbers. We have de-Googled phones that can make you invisible to Big Tech.
We have an identity-free BraxMail service that hides metadata from others. We have the BytzVPN service and Brax routers that protect your identity and your IP address. All of these are in our store on BraxMe. Thank you for watching, and see you next time.