Do You Understand AI? There Is No Escaping It, So I'll Give You Solid Knowledge



Summary

➡ Many people use AI daily, but most don’t fully understand it. AI is being integrated into many aspects of life, including work and education, but it has limitations and biases. It’s important to be aware of the risks, such as privacy concerns and misinformation. Using multiple AI sources and not relying on AI for sensitive decisions can help mitigate these risks.

Transcript

Most folks are using AI every single day, whether it’s casually chatting with a chatbot or letting an AI agent handle real tasks and make decisions for them. Others are in complete denial, insisting it’s not really intelligent; some fixate on every model’s political bias and choose their AI accordingly. And then there are those who are genuinely fearful, convinced that every question they ask is being recorded, analyzed, and used to build a detailed profile of their personal life. Meanwhile, Microsoft, Apple, and Google are aggressively embedding AI into every corner of their operating systems, promising it’s all seamless and perfect.

At the same time, companies are integrating AI directly into workflows, already replacing real employees and reshaping entire jobs. After talking with regular people, including my own family, I’m convinced the vast majority still don’t truly understand what AI actually is, what it’s really for, or what the real risks are. Today, we’re cutting through the hype, the denial, and the fear to build a solid foundational understanding. Because whether you like it or not, AI is now part of daily life. It’s not optional. Kids are already using it constantly at school, and the long-term effects on learning, critical thinking, and even their ability to form real human connections are raising serious red flags.

So do you really understand the true risks, from privacy invasion and constant surveillance to over-reliance, cognitive atrophy, and workplace displacement? This channel is all about tech, with eyes wide open. I share advanced, no-BS knowledge so you can decide for yourself what AI actually means for your life and your privacy. If you want to get really grounded on this topic, instead of riding the hype wave or rejecting it outright, stay right there. The opening experience we have with AI is the chatbot. Basically, this is a chat interface where you have a conversation with the AI.

Some of you started doing this on ChatGPT a couple of years ago. Some of you use Grok, or use a built-in one like the AI included with Brave Browser. Many beginners don’t really know what an AI does, so they start asking questions about current events. Or typically, a new user will probe the AI for some political bias. Or the more common scenario for young people is when they ask the AI to do their homework. AI has limits, and you need to understand what they are. AI doesn’t know current events. Back in 2024, during the Butler assassination attempt on Trump, people started asking ChatGPT for details about that event, and then they were surprised when ChatGPT claimed no knowledge of any assassination attempt.

Then those same people went to the press, obviously also clueless about how AI works, which then started making claims of political bias in the AI. This is truly a basic misunderstanding of how AI works. AI works by a process called machine learning. But since this takes a long time to perform, the models you use today may have been trained two years ago on the internet data available at that time. This means that, by default, a regular LLM has no understanding of any current event. So if you want AI to analyze some new event, you first have to give it full details of what the new event is; then you can proceed with your question.
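To make that concrete, here is a minimal Python sketch of the workaround I just described: feed the model the event details yourself, in front of the question. The function name and prompt wording are my own illustration, not any vendor’s API.

```python
def build_prompt(event_details: str, question: str) -> str:
    """Front-load the prompt with facts the model was never trained on,
    then ask the question against those facts."""
    return (
        "You were trained before the following event occurred. "
        "Treat this description as ground truth:\n\n"
        f"{event_details}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    event_details="On July 13, 2024, there was an assassination attempt "
                  "on Donald Trump at a rally in Butler, Pennsylvania.",
    question="Summarize what happened at the Butler rally.",
)
print(prompt)  # this combined text is what you would paste into the chatbot
```

The key point is simply the ordering: the model sees your supplied facts before the question, so it answers from them instead of claiming no knowledge.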

AI has bias. There’s no doubt that AI has bias, and this has to do with what dominates the data that is used for learning. For a time there, a large amount of the data used for learning was based on people’s comments on Reddit. So this is totally tainted by the ideological subgroups that tend to dominate that platform. Another main source of data is Wikipedia, which, as I discovered before, is not really a balanced source, since wiki editors can be organized to push particular content, and intelligence agencies have floors of people dedicated to manipulating content in it.

Further, heavily weighted sources are peer-reviewed papers from academia. While this is certainly justified for scientific topics, the same sourcing is used for social topics and often reflects the political bias of academia. Then we have the total absence of some information. Private corporate information about products that are not open source is not available for AI training. So AI machine learning is not complete, and it will use only what it has access to to determine its base intelligence. Additionally, AI model creators like OpenAI, xAI, Anthropic, Google, or Meta will then put directives into the model before releasing it, to set up guardrails on the AI’s behavior.

This is a direct application of bias in most cases, and you’ll have to trick the AI into bypassing these built-in directives. AI may optionally include current events. In general, a raw LLM model does not have current events or current data. As I already said, it’s not going to know the NASDAQ composite index today, for example, but a model can be given supplementary data automatically. Basically, you attach a search engine to the model, and then your prompt is first passed through the search engine, so that the prompt is modified to include current events, if any exist.
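Here is a rough Python sketch of that consolidation step. The placeholder snippets stand in for what a real search engine would return; the function name and prompt wording are my own illustration of the technique, not any vendor’s implementation.

```python
def augment_with_search(user_prompt: str, search_results: list[str]) -> str:
    """Build one consolidated prompt: fresh search snippets first,
    then the user's original question."""
    context = "\n".join(f"- {snippet}" for snippet in search_results)
    return (
        "Current information from a web search:\n"
        f"{context}\n\n"
        "Using the information above where relevant, answer:\n"
        f"{user_prompt}"
    )

# In a real pipeline the snippets would come from a search API;
# these are placeholders.
consolidated = augment_with_search(
    "What did the NASDAQ composite do today?",
    ["(placeholder search snippet about today's NASDAQ close)"],
)
print(consolidated)  # this consolidated text is what the model actually sees
```

The model itself never "goes online"; it just responds to this larger prompt that happens to contain fresh data.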

Then the model responds to the total consolidated prompt. Not all models will do this. If you do not use specific tools that enable web search, you will find that it is not included. The specific AI that is very good at integrating current events is Grok from xAI, so you do not have to do any implementation of web search; it is built in and works whether you’re using X or the Grok.com site. So I tend to use Grok for everything that requires current information. Brave Leo also automatically integrates with Brave Search.

AI Creativity

AI as currently designed is not creative.

It can’t come up with new concepts. It has learned from past collections of human thought, so it is not able to create a new idea independently. It may seem creative by inserting randomness into the equation, which AI power users know as temperature: an instruction to the model to introduce more variance in results. This can make stories more interesting, for example. But from a factual, science- or tech-based point of view, it is completely worthless for originality. So no, there is zero chance it will write a YouTube script for me, and you should understand that limitation too.
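For those curious what temperature actually does under the hood, here is a small, self-contained Python sketch of the standard softmax-with-temperature calculation used when sampling the next token (simplified to three candidate tokens; real models work over vocabularies of tens of thousands):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw model scores (logits) into next-token probabilities.
    Higher temperature flattens the distribution (more variance);
    lower temperature sharpens it toward the single most likely token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
print(softmax_with_temperature(logits, temperature=0.2))  # peaked: top token dominates
print(softmax_with_temperature(logits, temperature=2.0))  # flatter: more randomness
```

At low temperature the probability mass piles onto the top-scoring token, so output is predictable; at high temperature the distribution flattens and less likely tokens get picked more often, which is what reads as "creativity."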

Now, if the model does not have data, it will do something called hallucination: it will actually make things up. So just be aware of that. That’s one of the creative aspects of AI that you don’t want.

AI Risk: Disinformation and Censorship

The main risk from AI is disinformation, and not from not knowing current events, but from intentional directives meant to limit information to the users. In other words, censorship. This is easily done by directives within the AI, and can be embedded into the models by Big Tech or by Chinese AI model creators to push a particular point of view.

As more and more AI options become available, this gets cancelled out as a threat, as more people are willing to use multiple AI sources instead of relying on one. The problem is that some people make an AI choice based on a political bias. A common one is, “I don’t trust Elon, so I won’t use Grok.” Or, “OpenAI is left-wing, so I won’t use that.” The best answer is to use multiple models. Since these biases can be injected into a model intentionally, never rely on answers to questions that can be subject to censorship or government redirection.

During the COVID pandemic, for example, would you rely on AI to make judgments for you on what vaccines to take? I wouldn’t. I’d rely on my own intelligence.

AI Risk: Privacy

The next risk from AI is when the model uses personal information and forwards that to a third party or to HQ. If you use a chatbot like ChatGPT or Grok, then obviously you’re sending your prompts directly to the maker of that AI model. What they do with your prompts is likely stated in some privacy policy of that particular entity, but in general, you have to assume that the data may be acquired.

It’s exactly what you would expect when using Siri, for example. Lately, Elon claims that Grok is now good at doing tax returns. So given this, would you actually send your tax returns to Grok, ChatGPT, or Claude? How about having it analyze your medical records? I hope not. This is a totally unsafe action, and if someone acquires your information this way, then it is your fault, so own up to it. On the other hand, benign projects like “do my homework on Indian tribes” may not have much of a risk, as there is no personal information tied to them.

If what you’re researching is public information, then you may think there is no risk to it. However, politically or ideologically tainted prompts may be used for surveillance, just like how your search queries are currently recorded. If you do queries on sensitive topics like how to make weapons, then don’t be surprised if that ends up at Palantir. The specific fear, though, is that people think the model will learn their political leanings and use them to dox them in the future. This is a false analysis. No, the threat is a direct search of your queries, not AI learning.

AI learns from billions and billions of data points. It is not going to be used to identify Joe Blow as the source of a political point of view. Instead, it will learn from all the different points of view without remembering the person’s name. AI learning cannot be used for doxxing, but query data or surveillance data compiled separately can be used by the AI to identify a target. And it already does this, for example, by having the AI search through Palantir databases. So the risk to privacy isn’t AI learning. It is the AI process capturing your prompts and forwarding them to a surveillance organization.

AI Risk: Embedded in the OS

This is the extreme danger of AI. This is what I’ve been describing as a true risk over and over. The worst implementation of AI is Windows Recall on Windows 11. The concept here is that Windows Recall is a spy process that captures screenshots of what you’re doing on screen. Those screens are then analyzed and stored historically as text descriptions. Every time you use Copilot, that history of your behavior on your computer is compiled and sent to the model as part of your prompt. This can then end up either on a local AI model or, more likely, forwarded to a large AI model in the cloud on a Microsoft server.

Thus, this causes a huge personal data leak and risk. The problem here is that you don’t choose what Windows Recall is capturing. It captures everything you’re doing. While your interactions with a chatbot involve feeding it purposeful prompts, in Windows’ case it is completely a black box, and this is the biggest personal threat of AI. Even with recent changes making it optional in some cases, the black-box approach remains a serious privacy red flag. Always check and disable it if possible, or go to Linux. At the moment, only Windows is designed with this particular usage of AI.

Currently, Google and Apple are basing their AI actions on specific data you point the AI to. For example, if you say “search my messages for my conversations with Dave,” you are making the active decision to pull that specific portion of the data. This is still not 100% safe, but it is less of a risk than the black-box Windows version.

AI Risk: Local AI

The other way to use AI is with a local model. This is AI that you can run on your machine without connecting to any external party. Currently you can do this, for example, with a tool like Ollama.

This is mostly how I actively use AI. With a local model used in this way, the risk of my prompts being sent to a third party is completely eliminated. In fact, this kind of AI use means the AI doesn’t even need an internet connection. However, having no internet connection isn’t completely practical, because we already discussed the need to supplement the information with search data. So your local AI model can be hooked up to do web search, and then it will be just as useful as using Grok. This requires some setup, but I can teach you all this if you want it.
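As a taste of that setup, here is a minimal Python sketch of talking to a local Ollama server over its default API on port 11434. This assumes Ollama is installed and running, and `llama3` is just an example model name; nothing here contacts any party outside your machine.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def local_generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama server and return its answer.
    The prompt never leaves your machine; no cloud provider sees it."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one consolidated JSON response
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with the model pulled):
# print(local_generate("Explain what an LLM is in one sentence."))
```

Because the endpoint is localhost, the privacy property is structural: you can unplug the network cable and this still works.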

Let me know in the comments if you want to learn this.

AI Agents

The incredibly popular topic today is AI agents. This is an AI that leaves the chatbot stage and goes into the realm of actually doing things for you. This is what Windows is trying to do for you, but they will do it in an unsafe manner, as I already discussed. But you can use AI agents in a safe way when you control the AI model and control the AI agent that does the work. This is the subject matter of my many recent videos on the use of an AI agent called OpenClaw.

I’m sure that not long from now there will be a ton of variants of this. The main advantage of OpenClaw is that it is software that does tasks for you, powered by an AI model, and you control it all on your computer. No external party controls it. Thus it has none of the privacy risks or surveillance risks I mentioned before. While personal use of AI agents is mostly in the fun, experimental stage, use of AI agents in the workplace is fully a for-profit exercise. I’m proving this concept myself by creating a customer service employee using OpenClaw on a local computer and having it interact with people on my Brax.me app.

Unfortunately, this is not just a concept. Large tech companies are now firing programmers left and right, because AI can now be instructed to write code efficiently, with the humans just setting up the rules for how the AI works. Because of the seriousness of the use of AI in business, it is no longer a joke, and my proof of concept of making it work in a privacy-safe way is important for the future, as there is no escaping this.

AI Deniers

Some of you are AI deniers, those who claim that AI is nothing but probability rules and therefore has no intelligence.

This completely misses the point, as it is currently impossible to come up with programmatic rules to duplicate the logic of an AI model. For example, that has been tried in self-driving cars like Tesla’s, where a hybrid of legacy code was mixed in with AI. Now Tesla Full Self-Driving, or FSD, is totally AI based. How it does this is explained in an old video I made about transformers, but an AI denier will not watch that video. What I will say, though, is that it is irrelevant. Today AI is self-driving. Today AI can write code.

Today AI can replace specific jobs. Today AI can power robots. Is it super smart? Depends on the model. Like people, some models are super smart and some are at an elementary-school level. There’s no question that if you ask an AI, it can give answers. Some answers are useful enough.

The Current World of AI

There are additional concepts that I think are not particularly relevant for today’s use, like the concept of AGI, or artificial general intelligence. This is when an AI will be smarter than a human. That is a different topic, and as much as it may be interesting to an AI denier, I call it irrelevant.

We have technology today embedding AI into everything. It can be smart or basic AI. But it automates tasks. It is part of everyday life now, from clearing backgrounds on your Zoom call, to erasing people from photos, to writing essays and doing research, and now moving on to programming, checking your email, and responding to clients. My job, as I said at the beginning, is to study technologies, explain them to you, and warn you of dangers. Currently the danger lies in sending your queries to third parties, in having Windows watch everything you do and then send your personal history to a third party, and in the AI itself being manipulated to do disinformation and censorship.

Beyond that, get used to it; it is not going away. Get educated. Control your tools. Protect your data. Folks, privacy is of course the main focus of this channel, and I teach you technology so you understand the risk technology adds to your life. We have people who discuss these issues at my platform, Brax.me. Join us there and become educated about these complex problems. To support this channel, we have some products in our store that provide the toolkit to retain your privacy. They are awesome products. We have BraxMail, an email service with unlimited aliases and identity protection.

We have the Brax Virtual Phone, with anonymous phone numbers. We have services to shield your IP address from privacy-invading laws. We have de-Googled phones, free from Big Tech tracking. The successful Brax 3 phone is open for pre-order right now at BraxTech.net, and the new Brax Open Slate Linux tablet is also a new project you can check out on BraxTech.net. Big thanks to everyone supporting us on Patreon, Locals, and YouTube memberships. You keep this channel alive. See you next time.

See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.
