RAG Privacy and the Future of AI-Powered Persuasion: A Deep Dive | Rob Braxman Tech

Posted in: News, Patriots, Rob Braxman Tech


Summary

➡ Rob Braxman Tech discusses how OpenAI’s CTO, Mira Murati, warns about the potential risks of AI models due to their ability to influence and control users. AI can be used to manipulate populations, a tactic already employed by companies like Google to alter human behavior. AI models are trained with a bias, which can be adjusted to steer users in a specific direction. However, it’s important to note that not everyone is susceptible to this manipulation, particularly those who are aware of these tactics.

Transcript

Here’s something interesting in the news. The OpenAI CTO, Mira Murati, openly stated that current AI models pose major risks due to their ability to persuade, influence, and control users. Now, this doesn’t surprise me at all, and I’ve made many videos warning about this kind of technology threat. But specifically, I want to show you why even current AI models can be configured to perform CIA-type missions, missions where you can manipulate populations. What I would like to do, though, is investigate this and see how it can be done. In fact, it is already being done outside the framework of AI, but now it can be done more effectively with AI.

Here’s the surprising factoid, though: some of us will be immune to this. In fact, probably most of those who pay attention to me are already immune to it. But it will be fascinating to find out why. As part of the AI series, I’ve talked about some of the mechanics of how AI works and how it will be used. Now you will see the benefit of understanding this technology when I explain how it can be used to manipulate people. Interested? Then stay right there. In this social media age, persuasion is always the main theme.

Just to give you an example from Big Tech, Alphabet has a company called Jigsaw, which is focused on using data to alter human behavior. Their first experiment was a project to identify Islamic extremists in London. I believe this project was done in association with GCHQ, the UK’s equivalent of the NSA, which was discussed by Snowden. Google, of course, can identify individuals by Google ID, something I’ve talked about extensively here. Specifically, they used this data to identify people with extremist points of view based on their Google searches. Once the targets had been identified, they reformatted page one of the search results to return data that was the opposite of the original search topic, which was about extremist Islamic views.

Since then, this technology has been used to target particular groups of people that Google has felt needed reeducation, perhaps as much as 50% of the population, apparently. This is pre-AI. But make no mistake, the goal has always been there among certain entities to influence people. Google, of course, is in the advertising business, and persuasion and influence are the main drivers of its profits. It is likely that Thomas Crooks, the shooter in the attempted assassination of Donald Trump, was also influenced by social media and by the words of someone driving this man to take an extreme act.

The attempt to persuade even large populations to revolution is something the CIA has been dabbling in since the Cold War. So in general, knowing what you are thinking and controlling the information you receive should be understood as a lever that can be used to manipulate a population. Now, there’s a new element to this. We now have people relying on AI as a source of information. Search engines no longer just return lists of sources. AI-enabled search engines actually present information to you, summarized according to the analysis performed by the AI engine, and now potentially with the context of knowing what you think.

Maybe this is not completely apparent to most of you, but a pre-trained AI, whether a massive model like GPT-4o or even open-source models like Llama 3 or Gemma 2, has information on all the techniques to persuade. Every academic paper on target marketing techniques, every marketing attempt, every public ad ever shown, and their recorded results are part of the training database used in machine learning. From the political side, all the common arguments and even the corresponding poll results of those arguments are part of the public record and are available for machine learning. If a particular point of view is part of the data set used in machine learning, then you can expect an AI to know about it.

On top of this, the AI companies who create these models also know that, for whatever reason, the source data for machine learning results in a bias. Mira Murati not only acknowledges the danger in AI’s ability to persuade, influence, and control users, but specifically notes that OpenAI’s models have a bias. This is just from the original data. Then each company attempts to modify or enhance that bias using techniques like fine-tuning, which I explained in my other AI videos. So I just want to make it clear that by default, the AI model is built with the bias of the model creator.

You can alter this bias if you control the AI. Using fine-tuning, a pre-trained model can be censored, biased in any direction, and directed to manipulate based on what the creator wants it to do. Watch my prior AI videos in this series to understand more about how this can be done. Thus, the AI model itself is not locked into a fixed point of view, even if given a fixed set of data for machine learning. The model creator has the ability to make an AI work in any particular way, depending on its own goals. For example, when you use a search engine for your information searches, the results are controlled by the provider of that search engine.

Notably, we have Microsoft, Apple, and Google, and even smaller players like Brave, now doing AI search results. Microsoft and Apple, for example, may be using OpenAI models as the base for their AI. However, they are free to manipulate the model after pre-training. So, thinking about the programmed approach to influencing Islamic extremists in London, an AI could be configured to use its independent ability to persuade to influence a target in any particular direction. Now, I don’t want to trigger some automated censorship on this video, so I’ll give you a very benign example. It may be boring, but at least it gives us a starting point.

I gave different AI models a task. Specifically, I used Gemma 2 from Google, Llama 3 from Meta, and Qwen 2 from Alibaba. The task was to create an ad for the Tesla Optimus robot, to see if they have the ability to create ads that persuade us to buy it. And here’s the prompt: you are an advertising company; create an ad for the Tesla Optimus robot, which will sell for $20,000, less than the price of a car. I just want to see the quality of their ability to reason and persuade. Here are the results of these queries.
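If you want to reproduce this kind of test yourself, here is a minimal sketch, assuming the three models have been pulled locally through Ollama and the official `ollama` Python client is installed. The model tags and output handling are illustrative, not a transcript of my exact setup.

```python
# A sketch of the same experiment, assuming the models are available
# locally via Ollama (https://ollama.com): pip install ollama
import ollama

PROMPT = (
    "You are an advertising company. Create an ad for the Tesla "
    "Optimus Robot, which will sell for $20,000, less than the "
    "price of a car."
)

# Model tags as published in the Ollama model library.
for model in ["gemma2", "llama3", "qwen2"]:
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(response["message"]["content"])
```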

I’m not going to judge which one is the most effective, though it is interesting how different the ads each model generated were, and how each focused on completely different things. But given the task of persuasion in the ad, they surely came up with a lot to say, even when I didn’t provide much information in the prompt. I’m just showing this to you quickly. You can pause the video if you want to read it in more detail. Next, let’s expand our research and see what the AI has to say about how one is successful in persuasion.

Again, I will ask a generic question and see where this takes us: what do you do to be effective in advertising? In this particular response, the AI clearly said this: know your audience inside out. And it goes on to list demographics (age, gender, location, income, education), psychographics (values, beliefs, interests, lifestyle, personality traits), and so on. Using another similar question, I got a response that repeatedly used the word targeted. Anyone who has studied a bit of marketing or has run a business that’s done advertising knows that success in advertising comes from a targeted message to a targeted audience.

In other words, as the first AI response stated, know your audience inside out. Now, let’s research this a bit further. It should be clear that if AI can target us personally and know us, then we could be vulnerable to what the AI says. So does the AI know who we are? I watch this very carefully, and in fact, I test AI models specifically for this. Are these models trained to know personal data? The answer is clear. As far as I can see, none of the current AI models from Google, OpenAI, Meta, Anthropic, and others have focused on collecting personal data.

Now, it doesn’t mean that these models have no personal data. Obviously, someone like me may be known to AI models because I’m a public figure. But it appears that social media posts on walled platforms may not, in themselves, be part of the data used to train the current batch of AI models. Let me explain why this worries me, though. Specifically, Google and Facebook have a private domain of extremely personalized data, which they already use for targeted advertising and other methods of influencing. Google and Facebook, of course, are big in the AI business. Two of the models I was using for research in this video are Gemma 2 from Google and Llama 3 from Meta.

Thus, we can state this clearly: the current batch of AI models, even from sources that have personal data, did not use personal data in training. Could they have used general data without identifiers to track user behavior? Yes, that is certainly a possibility as well. But in general, it would appear on the surface that AIs like Copilot, Apple Intelligence, Google Gemini, Bing, and so on would not be able to target you using a standard pre-trained model by itself. And you would think you’re safe. Actually, you’re not. And this is why I spent so much time with the earlier AI videos actually teaching you how AI works.

One of the most popular techniques used in AI inference is something called Retrieval Augmented Generation, or RAG. This technique is used because of the general knowledge deficiency of pre-trained AI models. Not only do they have no personal data, but AI generally has no access to domain-specific data. For example, it has no data on classified government documents, no private corporate data on product design, no private manufacturing data on secret military projects, no data on internal communications between people in Big Tech, and in theory, no access to personal data from credit agencies or government databases. The training data is based on publicly accessible internet data, published academic papers, books, publications, media reports, and so on.

This is not bad. This base corpus of information is enough to give the AI its perceived intelligence. But if a company wanted to use an AI to handle product service and support, then the AI has to be given supplementary information. Using AI in this way is a legitimate and common application of AI for a business. So understand how the AI model works in this application. If a customer asks a question, which is called a prompt, the AI for this company will go through the following flow. Let’s specifically say the question is: my marine air conditioner is not cooling.

A specific company, like a hypothetical Marine HVAC, Inc., would retrieve information from its database for context and technical details for this problem, like specific parts and part numbers. Then the pre-trained AI model will use this extra information as context and respond to the user. The retrieval of background content prior to submitting the prompt is called retrieval augmented generation, which grounds the AI response in relevant facts instead of letting it hallucinate fake information. So RAG is the common way to supplement the AI’s information so it can be extremely useful without having to train on private data.
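To make the flow concrete, here is a minimal sketch of that retrieve-then-generate loop. The Marine HVAC knowledge base, part numbers, and the keyword-overlap retriever are all illustrative stand-ins; a production system would use an embedding model and a vector database, but the shape of the flow is the same.

```python
# A minimal sketch of the RAG flow described above. The company data
# is the hypothetical from the text; the keyword-overlap retriever
# stands in for a real embedding model plus vector store.
import ollama

KNOWLEDGE_BASE = [
    "MC-200 marine air conditioner: if not cooling, check the "
    "raw-water pump (part RWP-113) and clear the seawater strainer.",
    "MC-200: error code E4 indicates low refrigerant; use recharge "
    "kit RC-77.",
    "Warranty claims require the unit serial number and proof of "
    "purchase.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

question = "My marine air conditioner is not cooling."

# Step 1: retrieve company data relevant to the customer's question.
context = "\n".join(retrieve(question, KNOWLEDGE_BASE))

# Step 2: hand the retrieved context to the pre-trained model so its
# answer is grounded in real facts rather than hallucinated ones.
response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system",
         "content": f"You are a support agent. Answer using only this "
                    f"company data:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(response["message"]["content"])
```

Swap in a real vector store and the flow is identical: retrieve, prepend as context, generate.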

This is a known approach and something I discussed in detail in earlier videos on AI. The pre-trained AI model already has the intelligence; it just needs the current facts to present. Now, how is this relevant in our personal data case? In order for AI to target you, it has to know you deeply. And how does it do that? Easy. Facebook and Google can supply the AI with knowledge of you via RAG. This is why it was never necessary to give personal data to the pre-trained model. And it also allows them to do this more surreptitiously.

When you query Google search, it will likely use your Google ID to identify you, pull your profile, and then respond to your search request with a very targeted result. Same with Facebook. Facebook obviously has your entire, deeply personal life history to be able to target you. What about Microsoft and Apple? Well, this is what I’ve been trying to reveal to you recently. Think about the keylogging and screenshots of Windows Recall, or the media neural hashing by Apple. Though Microsoft and Apple may not have been in the personal data collection business, your personal data is accumulated on your device.

Your activities and existing data can be scanned by the AI and can provide the personal profile that will be the context for the AI response. In other words, the RAG data: data that can be used by Copilot and Apple Intelligence. Thus, to have a truly successful impact on persuading and influencing you, they need to know you intimately and fully, and they will. This is something I’ve really tried to explain to you in technical terms in recent videos. I’ve even shown you examples where I provided a personal context to the AI, having scanned a medical report of a fictitious person, Julie Smart, and then when I asked the AI if I could invite Julie to a party, the AI said I should not.
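Mechanically, the Julie Smart demonstration is the same RAG pattern with a personal profile as the retrieved context. A minimal sketch, with the profile text and wording invented for illustration:

```python
# The same RAG pattern turned on a person: a private profile (here,
# the fictitious "Julie Smart" from the video) is injected as
# context, and the model's answer is shaped by it.
import ollama

# Hypothetical stand-in for a scanned document found on the device.
profile = (
    "Medical report, Julie Smart: recovering from surgery; advised "
    "to avoid crowds and social gatherings for six weeks."
)

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "system",
         "content": f"Context known about the user's contacts:\n{profile}"},
        {"role": "user", "content": "Can I invite Julie to my party?"},
    ],
)
# With the profile injected as context, the model will typically
# advise against the invitation, mirroring the example in the video.
print(response["message"]["content"])
```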

Hopefully you’re seeing some connections in my message now. The secret to disabling the AI’s influence on you is denying it access to your personal data and your opinions. This threat will initially show up in the use of search engines, and then, depending on your device, later in the use of Apple Intelligence or Microsoft Copilot, which would typically be accessed by voice. So to be specific, if you’re using Linux as your OS, then you have no fear of the keylogging, screenshots, and media scanning performed by Microsoft and Apple. However, this does not defend you against the use of the search engines.

My old lessons still apply here. If you use browser isolation and control where your Google ID is seen, then following my advice would deny the Google search engine knowledge of your activities beyond what you do when logged into Google. Now on the phone side, using phones like a Brax2 phone running BraxOS or another de-Googled OS would automatically protect you from Google profiling. Using a VPN would prevent the AI from acquiring personal data via your current IP address. Declining to use Gmail or mail provided by profiling organizations would then eliminate the possibility of profiling you by reading your mail.

The point is the same: privacy protects you. It’s kind of amazing, in a way, that the same tools we’ve been promoting over many years still protect us even from a new threat, the use of AI for persuasion, influence, and control. But here’s the problem. While we privacy-conscious people who follow the directions I lay out are safe, the normies are very vulnerable. The more of us that are privacy-aware, the less successful a devious entity will be in manipulating a population to do its bidding. The problem is that the threat footprint has increased with the use of AI.

The ability of entities to control us has increased, and our efforts to defend ourselves against this new level of danger must also increase. Folks, thank you for listening to me and my teachings. It is important that we spread the word and expand our privacy-aware community to include more people. Please start by supporting us with likes and a subscription to this channel, as the YouTube algorithm uses that to decide whether others will see this video. I also created a company to support this movement by providing products that deliver privacy solutions. We have a Brax Virtual Phone product that allows us to have identity-free phone numbers and even calling without the need for a SIM card.

We have the de-Googled phones which have no Google ID to identify their owner. We have the Brax Mail product which prevents your identity from being attached to your emails. We have BytzVPN and Brax routers to protect your IP address, which is a frequently used identifier that can be used to profile us. All these products are on my platform, BraxMe. We have a community there of over 100,000 users. Please join us there; the store with these products is available to you on that platform. Thank you for watching and see you next time.

 

See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.





Tags

AI Influence and Control, AI Manipulation of Populations, AI User Manipulation, Awareness of AI Manipulation Tactics, Bias in AI Training, Google's Use of AI to Alter Behavior, OpenAI CTO Mira Murati, Potential Risks of AI Models
