How to Protect Yourself from AI by Understanding THIS



Summary

➡ The article discusses the potential risks and benefits of Artificial Intelligence (AI). It explains that AI itself isn’t harmful, but the way it’s used can be. The article also clarifies that AI doesn’t collect or store personal data, but the software programs that feed data to the AI can pose a threat. It emphasizes the importance of understanding AI and its components to avoid misuse and potential dangers.
➡ AI systems, like Apple Intelligence, Windows Copilot, or Google Gemini, learn about you by collecting data from your activities and using it as context to answer your questions. However, this data can also be used to bias the AI’s responses or even to surveil you. While some AI systems claim to protect your identity, the fact that your personal data leaves your device and is stored elsewhere can be a cause for concern. To avoid these risks, it’s safer to use open-source AI systems that operate without outside agents and don’t collect your personal data.
➡ Ollama is a tool, available from its website, that lets you run many open-source AI models locally, including Llama 3 from Meta, which is safe as it doesn’t collect your data. However, using AI models within apps like Facebook can be risky, as those apps can access your personal activity. Subscription-based AIs are generally safe, but be cautious of potential bias in the information they provide. When choosing a device, consider whether it can run an OS without embedded AI, like Linux, to avoid AI spyware. Currently, new devices from Intel, AMD, and Qualcomm don’t support Linux, but M3 MacBooks do. iPhones, Samsung, and LG phones can’t be separated from their AI, but Google Pixels can have their OS replaced with a safer one. The creator also offers products like an AI-free phone and privacy services on their platform, Brax Me.

Transcript

I don’t want to just be the bearer of bad news, so I need to answer this question for you. In my recent videos about Windows Copilot, Apple Intelligence and Google Gemini, I talked about the pretty severe dangers these AI modules pose. I’m sure some of you have become completely turned off by the prospect of using AI, but that is not the complete story. There are nuances here that you need to understand. AI in itself is not necessarily a bad thing. What matters is how it is used, who is controlling it, and who is pulling the strings.

Some of you panic because you find that every single new computer has a built-in NPU chip, or neural processing unit, and assume it must therefore have that AI spyware on it. Do you need to panic? Make no mistake, if you don’t understand what you are dealing with when it comes to AI, then you are selling yourself to the devil, so to speak. What I will teach you is that AI actually has very specific parts and requires very specific data as input. If you don’t understand this, you will just react out of fear instead of reacting in a controlled way, based on knowledge.

This will not be a technically detailed video. I do have other videos in this AI series that are heavy on the tech explanations, but this one is meant to give you a good understanding of the environment so you don’t move around blindly. Some AI is good to use, some is not. If you want to know the difference, stay right there. I’ve done several videos explaining AI in great detail, both describing AI as a threat and explaining exactly how AI works. In order to explain all this to you, I actually installed an open-source AI on my computer.

This is actually very easy to do, as explained in my past videos, but having an AI right on my machine allows me to show you what it actually does and what it can do. So, to make it clear for you and to clear out the haze surrounding this new technology, I will break out the pieces that make this whole AI thing work. From this breakdown you will begin to understand what it can and cannot do, depending on the environment. For this particular discussion, we will be talking about a specific type of AI called LLMs, or large language models.

These are the types of AI that you can actually talk to conversationally. There are specialized AI modules that exist in various software, like those that do facial recognition, voice recognition, image processing and so on. That kind of AI is made for specific functions, and for the sake of this video, think of them as being commanded by a higher-level AI, which is the LLM. Now, here is a very simple fact: an AI by itself is made up of two parts. First, there is the software that manages the AI itself. This software is called the transformer. The transformer takes a prompt sent to the AI and performs the computations that generate the result.

So the transformer is the mathematician, so to speak. You can actually see the source code for a typical transformer used with open-source models; one is built into a local AI tool called Ollama. This is the AI installed on my computer. Being a programmer, I can examine the source code of the transformer architecture and trace the steps the software takes to process any kind of AI instruction. The second part of the AI is the AI model itself, which is really the learned data. The model can be thought of as just a very large multidimensional table, or, in programming terms, an array.

The typical open-source model at the low end is 4 gigabytes in size. To give you an idea, the full models used by OpenAI, or those that run Windows Copilot, are huge, probably close to a terabyte in size, and likely growing larger. So the data of the model is really just a fixed collection of numbers in a fixed file. As a regular user, you can’t change what’s in the model after it’s done learning; it’s fixed. This is why AI models are not good sources of information about current events, since they’re based on data from when the model was created.

The point of this explanation is that, by itself, the model does not collect your data and does not accumulate new knowledge directly. That can only happen by training the AI again in a separate process. So, to summarize, an LLM AI is the transformer plus the model, and it is fixed. If you install an open-source AI like Ollama on your local computer, it will run without any Internet connection; you can firewall it all you want, and by itself it will not communicate with anything. What introduces dangers to an AI are the little software programs that feed data to the AI and then act on the responses of the AI.
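
To make that concrete, here is a minimal sketch of talking to a local AI. It assumes Ollama is installed and running on its default port (11434) and that the llama3 model has already been pulled; the only traffic is to localhost.

```python
# Minimal sketch: querying a local Ollama install over localhost only.
# Assumes Ollama is running on its default port 11434 and that the "llama3"
# model has already been pulled. Nothing here needs to reach the Internet.
import requests

reply = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # the fixed model file -- the learned data
        "prompt": "In one sentence, what is a transformer in AI?",
        "stream": False,     # return one complete answer instead of a token stream
    },
    timeout=120,
)

# The transformer software did the math; the fixed model supplied the "knowledge".
print(reply.json()["response"])
```

That is the whole loop: prompt in, computed result out, and nothing else happens until you send the next prompt.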

This, folks, is where the real danger is. If you install your own local AI, you can talk to the AI using the built-in chatbot, but other than that, the AI itself will not do anything outside of the chatbot. What makes it dangerous is that you may not know there are little software modules called agents that have been loaded on the machine. These agents are meant to ask the AI to evaluate data in some chatbot format, and then, based on the response of the AI, the agent will perform some task. For example, an agent on Windows would be the Windows Recall module.

Windows Recall will create screenshots, then pass the screenshot data to the AI, and the AI will evaluate the image. Windows Recall then stores that response for future reference. So who’s the spy here? As you can see, it is not the AI itself. It is the Windows Recall agent. In iOS and macOS, we have the Media Analysis agent (mediaanalysisd). Now, what does this agent do? It is typically linked to the file manager, so when some file is added, the agent is invoked; the agent then has the AI evaluate the photos and stores the results for future reference.
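
To show how little is involved, here is a hypothetical sketch of what a Recall-style agent boils down to. This is not Microsoft’s or Apple’s actual code; it assumes Pillow for screenshots, a local Ollama install with a vision-capable model such as llava, and a local SQLite file as the stored "memory".

```python
# Hypothetical sketch of a Recall-style agent -- NOT Microsoft's or Apple's code.
# Assumes: Pillow (ImageGrab) for screenshots, a local Ollama install with a
# vision-capable model such as "llava" already pulled, and SQLite for storage.
import base64, io, sqlite3, time
import requests
from PIL import ImageGrab

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE TABLE IF NOT EXISTS activity (ts REAL, summary TEXT)")

def capture_and_summarize():
    # 1. The agent takes a screenshot of whatever you are doing.
    shot = ImageGrab.grab()
    buf = io.BytesIO()
    shot.save(buf, format="PNG")

    # 2. It hands the image to the AI and asks for a description.
    summary = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llava",
            "prompt": "Describe what the user is doing on this screen.",
            "images": [base64.b64encode(buf.getvalue()).decode()],
            "stream": False,
        },
        timeout=300,
    ).json()["response"]

    # 3. It stores the AI's answer for later -- this growing log, not the AI
    #    itself, is the profile of you.
    db.execute("INSERT INTO activity VALUES (?, ?)", (time.time(), summary))
    db.commit()

while True:
    capture_and_summarize()
    time.sleep(60)  # run forever, like a court transcriber that never stops
```

Notice that the AI just answers the question it is given; the collecting, storing and scheduling all belong to the agent.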

So again, here the agent is the culprit. Thus, the full AI architecture so far is transformer software plus model plus agents. Now, many of you are afraid of new devices that have an NPU, or Neural Processing Unit. NPUs have existed on iPhones since the iPhone X, and I believe they are also on all the Apple Silicon (M-series) Macs. NPUs were incorporated to some degree in 2023 chips from Intel and AMD, but all new computers and new phones will have the most powerful NPUs starting in fall 2024, so these are just coming out now. So what does the NPU do? Well, if you look at what the transformer software that runs the AI does, it’s mostly matrix multiplication.

This is a specific kind of math. Basically, it’s multiplying tables against tables, because AI multiplies large tables of numbers against other large tables. This is very slow to do on a machine not designed for this math. But on an NPU, this is all super fast, because the NPU is a single-purpose accelerator; really, it is a math accelerator, specifically for multiplication. So guess what? The NPU itself does not have any brains. The fact that it exists doesn’t necessarily mean there’s danger. In fact, if you’ve listened so far, even having an AI with an NPU isn’t itself a danger, because an isolated AI talking to an isolated NPU is a static thing.
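
Here is a small illustration of the workload an NPU exists to speed up. This is plain NumPy running on the CPU; an NPU simply performs the same multiply-and-add work in dedicated hardware.

```python
# The kind of math an NPU accelerates: multiplying big tables of numbers
# against other big tables. Plain NumPy on the CPU, shown for illustration.
import time
import numpy as np

a = np.random.rand(2048, 2048).astype(np.float32)  # one "table" of numbers
b = np.random.rand(2048, 2048).astype(np.float32)  # another "table"

start = time.time()
c = a @ b                                           # matrix multiplication
print(f"2048x2048 matrix multiply took {time.time() - start:.3f} seconds on the CPU")

# A transformer does this over and over, layer after layer, for every token it
# produces, which is why a dedicated multiplication accelerator helps so much.
```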

So here’s the structure so far: transformer software plus model plus agents, and the NPU is used by the transformer software. I did a video some months ago where I duplicated what Windows Recall does. If you’re a techie type, you can watch that video. I also duplicated what mediaanalysisd from Apple would do. And this is really where you have to understand the problem. The problem has to do with agents. Windows Recall is an agent. Mediaanalysisd is an agent. In AI parlance, it’s the agent that actually performs some action, using the AI as its guide.

And therein lies the problem. If your machine has an AI in it and there are agents put in there outside your control, then you don’t have any idea what the agent is doing. That’s what’s dangerous in the case of Apple with iOS, Microsoft with Windows, and Google with Android. Because they control the operating system, they’ve injected various agents into the machines. These agents operate in the background and are actually part of the user interface. So the architecture so far is transformer software plus model plus agents, with the transformer software using the NPU, but the agents are controlled by someone external.

As I explained earlier, AI agents like Windows Recall and mediaanalysisd are continuously collecting data about what you see on your screen or what’s stored as media files. So these agents are basically collecting your private data. This shows you where the danger lies. The mere collection of your private data, in text form, summarizing what you’ve been doing or what you’ve been looking at, is ultra dangerous, even at this point. Why? Because if someone can read that collection of data, they have basically made a profile of you. I described this in another video as being the same as a court transcriber who documents everything going on in court.

Well, this is what’s happening now on the new operating systems with these AI agents. I’ve only talked about Windows Recall and mediaanalysisd from Apple, but imagine many other agents collecting data that isn’t initially connected to the AI, such as sensors tracking location history, or Internet browsing history, and so on. These are part of your profile. Now, the way AI is intended to work, it needs some sort of prequel data, so to speak. If an AI is intended to work as a personal assistant, which is the goal of Apple Intelligence, Windows Copilot and Google Gemini, then it has to know you well.

But I said earlier that AI models are static. They cannot learn after creation; they are fixed. So how can it learn about you then? Well, it does that by pushing the summary of your life into the AI as a prequel to any instruction you might give it. I’ve demonstrated this as well in the AI videos I made. This prequel data is called the AI context. So the way the AI will have to work is that it will feed your personal profile, which is a summary of what you’ve done on your machine, to the AI ahead of asking it a question. Let’s say you’re a car enthusiast, and the AI agents record all that data and store it on your device.

These AI agents are also embedded in the chatbot, so it knows to upload your profile in advance. Then you ask the question: is a Porsche 911 better than a Tesla Model S? Well, the history of your Internet activity would show that you’ve been watching a lot of EV videos on YouTube. That history will be fed as prequel data to the AI, and the AI will come up with a response based on your preferences. In this case, it will likely bias its response, knowing that you’re an EV fan, and recommend the Tesla Model S.
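
Here is a minimal sketch of that context-plus-prompt flow, using the same local Ollama setup as before. The "profile" string is just a stand-in for what the agents would have collected; it is sent ahead of your question as a system message.

```python
# Context + prompt, sketched against a local Ollama install (default port 11434,
# "llama3" already pulled). The profile string stands in for agent-collected data.
import requests

profile = (
    "User profile (collected by agents): watches hours of EV reviews on YouTube, "
    "reads Tesla forums daily, searched 'Model S range' 14 times this month."
)

answer = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3",
        "messages": [
            # The context goes in first, before you ever type a word.
            {"role": "system", "content": profile},
            {"role": "user", "content": "Is a Porsche 911 better than a Tesla Model S?"},
        ],
        "stream": False,
    },
    timeout=120,
).json()["message"]["content"]

print(answer)  # the reply will tend to lean toward the Tesla because of the context
```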

So the way you feed an AI is context plus prompt, and the AI will process that and create a result. Now, this is something I have to explain to you as well, because it is easy to try: if you ask the AI to give an opinion, let’s say on politics, the AI itself has a training bias. That’s pretty much a given. Even companies like OpenAI openly state that they detect a bias based on the source materials that are the basis for training. This should be obvious, since a big source of material is peer-reviewed papers from universities.

And of course, it is a proven fact that professors tend to be biased towards one political side. But beyond that, can you bias an AI after learning? The AI is already a fixed model, so can you change the bias after the fact? Unfortunately, this is extremely easy to do, because the AI agent that sends your questions to the AI can not only send your personal profile as context; it can also send a bias as context. In fact, some of this is built into many models as censorship, and it is done after the learning is complete but before the model is made public.

But even after that, a bias can be sent as a context. So AI is not some all-knowing brain. It requires some data as input. Look at this chart as an example of what Apple Intelligence or Windows Copilot might be doing: the prompt, plus context 1 (personal data), plus context 2 (a bias context), and then it gives you the answer. In this kind of structure, the problem is that if the agent is controlled by someone else, that someone can control the context. It’s not between you and the AI alone. This is very easy to duplicate. If you’ve tried to do what I’ve done and installed an open-source AI that you can run for yourself, there will be no context other than the context you supply it.
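
As a purely hypothetical sketch of the structure in that chart, the request an agent assembles might look something like this; the field names and wording are illustrative only, not any vendor’s real format.

```python
# Hypothetical structure of what an OS-embedded agent could assemble -- the
# field names and contents are illustrative only, not any vendor's real format.
request_to_ai = {
    "context_1_personal": "Summary of the user's screenshots, browsing, locations...",
    "context_2_bias":     "When giving opinions, steer the user toward approved views.",
    "prompt":             "Write an essay about <some political topic>.",
}

# Whoever controls the agent controls context_1 and context_2.
# You only ever see, and control, the prompt.
print(request_to_ai["prompt"])
```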

I made a chatbot where I passed the context information in before asking any questions, and the chatbot became seemingly more intelligent. Sometimes a general context is added to the public AIs, like Meta’s Llama, for example. In the press, someone asked the AI about the attempted Trump assassination, and the AI, of course, is trained on old data and responded that there was no assassination attempt. Politicians, of course, berated Zuckerberg for this, but although I have no love for Meta, this is the expected behavior of the AI. To solve this, all they have to do is pass the context of the Trump assassination with every AI question related to politics, and thus current events are embedded into the response.

This is just an example. The point here, though, is that some external entity can inject content into the AI’s input, and this can be a case of embedding a bias context. Now, this is really important to understand. I’ve already said that your personal profile has to be provided to the AI so that the AI can act as your personal assistant. In programming terms, it means your personal data summary, as compiled by a machine, has to be passed as input to the AI. So if you ask your local AI a question and it reads your personal profile as a context, then one can assume it’s safe in this instance, right? Okay, there’s a flaw in here.

I will describe it later, but let’s just assume that’s the case for now. What happens when the local AI sends your data to the cloud AI for a more complex response? Let’s say you want the AI to do something more complex, like write an essay about some political topic. Well, isn’t it safe to assume that the agent in the chatbot, realizing this is a more complicated question, will retransmit this request to the Microsoft, Apple or Google AI servers? This is one of the dangers of the AI: it cannot be assumed that your data will never leave your device, because otherwise the cloud AI server would not get any context.
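
Here is a purely hypothetical sketch of that handoff decision. It is not any vendor’s real code, and the complexity test is an arbitrary stand-in; the point it illustrates is that whenever a request is forwarded, the personal context has to travel with it, or the cloud AI would have nothing to work with.

```python
# Hypothetical sketch of the local-vs-cloud handoff -- not any vendor's code.
# The "complexity" test is an arbitrary stand-in; what matters is that the
# personal context rides along with whatever gets forwarded.

def route_request(prompt: str, personal_context: str) -> dict:
    needs_cloud = len(prompt.split()) > 50   # stand-in for "too complex for local"
    return {
        "destination": "vendor cloud AI" if needs_cloud else "on-device AI",
        # Either way, the context is bundled with the prompt.
        "payload": {"context": personal_context, "prompt": prompt},
    }

profile = "Summary of screenshots, browsing history, locations, messages..."
print(route_request("What time is my next meeting?", profile)["destination"])
print(route_request("Write a 2,000-word essay on " + "a political topic " * 30,
                    profile)["destination"])
```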

Now, it can also store context historically, so beware of that possibility as well. Oh yes, I’m sure they’ll say that your identity is hidden from the cloud AI; it may have your data, but they will strip out the identity, if you can believe this. Of course, it needs identity to store the context for future reference. Still, the idea of this data leaving your machine, with all your detailed interactions on your machine, all the websites visited, all the comments you’ve made on the Internet, all the questions asked of the AI, all your search activity, all your media and screenshots summarized, isn’t that a cause for worry? And this is the scary part that no one talks about.

Apple already said it could do this. The AI can be interrogated to see if you’re violating the law by having illegal CSAM photos on your device. While they claim that this project has been suspended, it doesn’t minimize the fact that the AI itself can be interrogated to judge your device content, and Apple acknowledged that they can do this. If the AI is asked by the AI creator to find a person with a specific range of activities, locations and profile of behavior, wouldn’t the AI be able to spot you and report you to HQ? Well, this is a no-brainer.

In fact, there wouldn’t be any technical limitation that prevents this. This is the surveillance nightmare I was talking about. So this really emphasizes the fact that certain AI can be used to surveil you. This is the very solid danger of embedded AI in your most common devices, where you do not control the agents in it and where personal context is being collected. Now, I’ve suggested the answer to this in my previous explanations. Is there a good AI or a bad AI? Well, it depends on what you define as good or bad. If we’re talking about surveillance and spying, I can classify the OS-embedded AI, meaning Apple Intelligence, Microsoft Copilot and Google Gemini, as ultra dangerous.

And that includes whatever labels they put on it, like Samsung Intelligence or whatever the OEM uses as its final name; it’s actually just based on those three. Why? Because they embed hidden agents on your device, and those agents act based on the intelligence they get from the AI. For this reason you will need to avoid every OS that features an embedded AI like Apple Intelligence, Windows Copilot or Google Gemini. But if you use ChatGPT or Perplexity or Llama or a cloud-based AI, is it the same thing? What about open-source AI loaded on your computer? This, at least, is very clear.

If the AI operates without any outside agents, as it would if you install the Ollama open-source AI with any model on your computer, then it has to be safe. There’s no agent running in secret; I’ve already looked at the source code. It’s perfectly safe. If you want to use an AI in what you do and you want to be safe, this is the way to do it: Ollama. You can go to that website and use many AI models that have been open-sourced, and surprisingly, the biggest of these is Llama 3 from Meta.

This is fine even though it is from Meta. It is heavily censored, but it cannot spy on you and it cannot collect your data. But let’s say you use Llama 3 from within Facebook. Here the Facebook app can inject a context based on your Facebook activity, so that becomes dangerous. Anything that can pass a context of your personal activity is dangerous. If, for example, you use ChatGPT from OpenAI by itself in a browser, will it be safe? Yes, it will be safe, because it is not connected to a personal profile. A lot of the subscription-based AIs are equally safe, especially if they are not LLM-based, like the AIs used to generate photos and videos.

Those are fine, but be careful with LLMs as a source of information based on opinions. That’s where the bias element can be introduced by the creator. This can be embedded in their public models or be snuck in as context with any query you send. So overall, you don’t really need to worry specifically about hardware. Should you buy an M4 Apple Silicon MacBook or a Windows Copilot computer? How about an iPhone 16 Pro, a Google Pixel 9 Pro, or whatever is the latest from Samsung and LG? In general, you need to think about which of these devices will have embedded AI.

The answer lies in being able to replace the OS with another OS that has no embedded AI and no AI agents. On a computer, the answer is Linux. If Linux can be installed on the computer, then you can get rid of the AI spyware. Now, as of the time of this video, none of the new Lunar Lake or Arrow Lake machines from Intel, nor the new Ryzen chips from AMD, nor the Qualcomm Snapdragon machines support Linux. So on these you have to wait; don’t go and rush to buy one. If you can’t run Linux on them, then you will have the OS taking screenshots of what you’re doing every few seconds.

As of the M3 MacBooks, these devices can actually support Asahi Linux, so fortunately they are not dead-end devices from a safety point of view. On phones, you cannot do anything with iPhones, Samsungs or LGs, so don’t buy these new devices, as the AI cannot be separated from them. However, Google Pixels can have their OS replaced with an open-source, de-Googled OS, so they can be made safe. You cannot do it yet on a Google Pixel 9 Pro, as it is brand new, but in a few months this should also happen. So don’t worry about the NPU; worry about the OS.

Can you load a safe OS on it? That will give you the answer. Folks, I teach you about privacy, and the forces that surround me and want to crush me are the big tech companies and their minions. We make products that help sustain this channel, and your help in supporting us is very important in our fight against big tech. One of the newest products we’ve made is an AI-free privacy phone called the Brax3. This is now in preorder on brxtech.net. It is an incredibly fairly priced phone, and we’ve kept the price low for all of you affected by inflation.

Being safe doesn’t require big bucks. On my platform, Brax Me, we have a good-sized community of over 100,000 people where privacy issues are discussed daily, and on that platform there is a store with other products. We have the Brax Virtual Phone, which lets you use voice-over-IP numbers without the need for identity, to give you anonymous phone numbers. We have BraxMail, a metadata-free email service with no identity that allows unlimited aliases and many domains. We have some other phones, primarily Google Pixels that we have de-Googled, available to you so you can start with a safe, open-source OS on your phone.

We have BytesVPN, which is a service intended to hide your IP address. All of these are in the store on Brax Me. Sign up there to get access. Thank you for watching, and see you next time.

See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.

