📰 Stay Informed with My Patriots Network!
💥 Subscribe to the Newsletter Today: MyPatriotsNetwork.com/Newsletter
🌟 Join Our Patriot Movements!
🤝 Connect with Patriots for FREE: PatriotsClub.com
🚔 Support Constitutional Sheriffs: Learn More at CSPOA.org
❤️ Support My Patriots Network by Supporting Our Sponsors
🚀 Reclaim Your Health: Visit iWantMyHealthBack.com
🛡️ Protect Against 5G & EMF Radiation: Learn More at BodyAlign.com
🔒 Secure Your Assets with Precious Metals: Kirk Elliot Precious Metals
💡 Boost Your Business with AI: Start Now at MastermindWebinars.com
🔔 Follow My Patriots Network Everywhere
🎙️ Sovereign Radio: SovereignRadio.com/MPN
🎥 Rumble: Rumble.com/c/MyPatriotsNetwork
▶️ YouTube: Youtube.com/@MyPatriotsNetwork
📘 Facebook: Facebook.com/MyPatriotsNetwork
📸 Instagram: Instagram.com/My.Patriots.Network
✖️ X (formerly Twitter): X.com/MyPatriots1776
📩 Telegram: t.me/MyPatriotsNetwork
🗣️ Truth Social: TruthSocial.com/@MyPatriotsNetwork
Summary
Transcript
Why this might actually be somewhat genius for big enterprises chasing productivity gains. But for you and me, the personal computer users, it’s an absolute privacy, security, and autonomy disaster. We’ll cover exactly what agentic AI in Windows looks like, the enterprise side where it actually makes sense, the massive consumer backlash, and then buckle up a deep dive into every terrifying risk that should make you think twice before letting these agents anywhere near your personal machine. If you’re already sick of big tech forcing unwanted AI blow down your throat, let’s expose this mess together. Stay right there.
What exactly is this agentic AI push in Windows? Let’s start with the basics, so we’re all on the same page. Agentic AI isn’t your old school chatbot that just spits out answers. These are proactive agents that plan and execute multi-step tasks autonomously with deep access to your apps, files, browsing history, and on-device context. Microsoft’s current and upcoming features include at-tagging agents right from the task bar for instant help, hovering over files in Explorer to get AI summaries or actions, background agents that run long tasks and show progress in notifications, offline email summarization, and smart agenda previews on Copilot Plus PCs, deeper integrations into Outlook Teams and eventually third-party apps.
On Copilot devices with powerful NPUs, it gets even more invasive. Agents pulling from recall screenshot history and on-device models to anticipate what you need next. Microsoft demos it like sci-fi magic. Hey, Copilot, plan my entire vacation. It searches flights, books, hotels, ads, calendar events, sends confirmations. Impressive on stage. But when that same agent is rifling through your personal documents, banking tabs, private photos, and chat history, that’s where the nightmare begins. And there are demos on OpenAI of deeper involvement of agents like completely responding to your emails and making purchases autonomously. It’s potentially genius for enterprise with serious guardrails.
Okay, I’ll give Microsoft this. In the enterprise world, agentic AI could legitimately transform productivity if it’s locked down properly. They’re building enterprise-grade infrastructure like Agent 365. This is basically a central command center where IT teams can treat AI agents like they’re just another employee on the payroll. Registering them, setting what they can and can’t do, keeping an eye on their activities, auditing their work, and even firing them if things go wrong. Securely runs on Windows 365 cloud PCs. Agents can operate in safe virtual cloud-based computers, keeping everything isolated and protected from messing with the main systems.
Deep ties into Microsoft 365 apps. Seamless connections for automating stuff like sales pipelines, financial reports, customer support workflows, and IT management, making teams more efficient. Oversight through purview labels, intra-ID rules, and human checks. There are data sensitivity tags to protect info, identity policies to control access, and always looping in real people for approvals to avoid AI going rogue. But here’s the thing. All those fancy safeguards like the Circle Command Center for Managing Agents, running them in isolated cloud environments, tight integrations with business apps, and strict rules for protecting data and requiring human approval. Those are built for big companies with dedicated IT teams.
For regular home users, we get none of that. No way to properly register, monitor, audit, or shut down these agents if something goes wrong. No automatic data protection tags or access controls that actually mean anything in a personal setup. And definitely no IT department to step in and stop an agent from doing something stupid or dangerous. In other words, enterprises have real guardrails. You and I? We’re completely on our own with powerful autonomous AI digging through our personal files and no meaningful way to control or even understand what it’s doing. So yeah, control enterprise use case, potentially revolutionary.
But that’s a tiny fraction of the built-in plus Windows devices out there. The rest of us? We don’t have corporate IT departments babysitting our agents. Personal Windows users are getting screwed. Massive backlash proves it. Now let’s talk about the personal side and why users are losing their minds. Ignite 2025 drops the agentic OS bombshell. Windows Chief Pawan Devaluri posts excitedly about the future and gets absolutely buried. Nobody wants this creepy surveillance AI, more force-bloat, prioritize stability over experiment. He likes replies. Classic Microsoft move. Reddit forms, tech sites, all exploding. Users comparing it to past disasters like Windows 8 or force telemetry.
In the poster child of unwanted features, Windows Recall. Announced in 2024 as a photographic memory that screenshots your screen every few seconds, it costs immediate outrage. Security researchers expose plaintext storage risks, domestic abuse vectors, malware exploitation potential. This caused it to be delayed over and over. It finally rolled out generally in April 2025 for Copilot Plus PCs, but still labeled preview, opt-in with encryption and content filters. But those filters are far from perfect. Partial sensitive info still leaks through. Apps like Signal and Brave block it by default. And users are still furious. Why are you forcing a permanent diary on people who just want to browse game or work without their PC watching every single move? Microsoft execs seem generally baffled calling their resistance mind-blowing while ignoring low Copilot usage stats and painfully slow Windows 11 adoption.
Consumed revenue is basically flat compared to the enterprise cloud boom, yet they keep bundling these experimental intrusive features into the same OS we all use daily. It’s tone-deaf, arrogant and risking real user alienation. A terrifying risk deep dive why this is a disaster for personal use. Now we get to the core of why this isn’t just annoying, it’s legitimately dangerous for personal users. Microsoft buries vague warnings about novel security risk in their docs, but let’s go deep on every single one. Number one, AI companion is your permanent diary. Copilot Plus Recall Plus agent contacts equals an intimate searchable record of everything you do.
Every website, message, document, photo, search term, it’s not just history, it’s your unfiltered digital diary of habits, secrets, opinions, beliefs, relationships and health concerns. The point is that the technology forces you to maintain a personal diary. I get it when you’re a teenager and want to find yourself and read back your thoughts. Do I want someone else to eventually read it? It bothers me that the AI reads it. And the bigger point is that why am I forced to keep a diary of my life to begin with? The selling point of this originally is that it can help you recall something as if you don’t have your own memory.
But the real reason for this technology is so that the AI has data on you. Intimate knowledge of you and your diary so the AI can, in theory, respond to the world just like you. That we are on a mission to create a true AI companion. And to me, an AI companion is one that can hear what you hear and see what you see and live life essentially alongside you. Number two, that diary can be exposed. The problem is that once you move private information from your brain to your computer, it can be exposed. It doesn’t have to be some major hacking breach.
It could just be someone else in your household borrowing your computer. Now it seems that the personal computer has become very personal. With this kind of AI in place, I would be reluctant to have someone else even touch my computer. And as far as a real hacking threat goes, encryption and on-device promises sound nice, but breaches, ransomware, insider leaks or forced biometric access happen. The early recall builds store data and plain text. Proof, they can screw this up. One compromise in your entire personal history is out there forever. Three, AI agents do your bidding with dangerous autonomy.
Agents plan and execute chains of actions without asking permission at every step. This is the secret sauce of AI agents, autonomous actions. In fact, they can initiate other AI agents that are also autonomous so you can have a problem of actually understanding which branch of your orders are getting screwed up when they get screwed up. Autonomous risky because Microsoft I’m sure is fearful of any major AI errors. The current beta version do not enable any external interfaces with AI agents. All the actions are local based for now. But remember that this is not the end goal.
OpenAI has already been making models that book reservations and make purchases through external websites. I wonder who will first litigate the liability of the AI agent when it makes a purchase that the user cannot afford to pay. Autonomy equals massively increased risk. The fact that one AI agent can trigger another AI agent can lead to error cascades and make things exponentially more complex. Number four, AI agent black box opacity. LLMs are opaque. You see inputs and outputs, but the internal reasoning completely hidden. Biases, flaws, unexpected behaviors, no transparency, no real trust. When an incorrect decision is made by the AI agent, how do you figure it out? We’re not very at Microsoft, but the OpenAI beta models already do this.
You can have the AI agent autorespond to your emails. Sure, the AI may be able to emulate your style of conversation, but what is it promising in the email? What if it’s saying something wrong? We’ll talk about hallucinations later. Multi-agent systems where agents delegate, collaborate, and spawn sub-agents, you lose all visibility in the hidden AI-to-AI ecosystem driving outcomes. In a non-enterprise environment, is it really necessary to stress a personal computer user with procedures on guardrails for the AI? Number five, AI agents can be manipulated. They’re not free thinkers. They’re biased by design. These agents don’t start from neutral ground.
The underlying LLMs are trained on internet-scale data that overwhelmingly reflects the dominant viewpoints, the loudest voices, mainstream institutions, big platforms, and culture priorities of those who control the most content. Whatever perspective was most prevalent or rewarded becomes the model’s baked-in consensus reality. Then alignment layers add developer-defined values about what’s helpful or safe. The result, your agent isn’t executing pure objective ideas. It’s operating within inherited biases that subtly favor the worldview of the powerful. In enterprises, companies can fine-tune an override. At home, you’re stuck with Microsoft pre-packaged consensus nudging every recommendation, analysis, or automated decision. Let’s apply this to basic purchase decisions.
AI agents can be made to bias choices, for example, in the selection of phones best for privacy. Here, the recommendation is likely going to be for some big tech production phone, because certain times for privacy, phone models will be suppressed. I’m afraid, too, that the AI can be paid to influence you on purchase choices or its opinions manipulated by bots on Instagram or X. But when this can be dangerously manipulated is when there are censored topics where only one point of view is allowed, or with political controls which depend on who’s in power. Number six, AI agents can be inserted, and because they’re black boxes, you’d never know.
The opacity of these models creates the perfect vector for hidden surveillance agents inserted by updates, supply chain attacks, or regulatory mandates that activate silently. Think Apple’s abandoned 2021 CSAM on-device scanning plan, which was scrapped after a privacy backlash, or the EU chat control proposals that even after 2025 compromises still leave vague doors open for future monitoring. Imagine a quiet Windows update slipping in a shadow agent that only triggers on specific patterns, monitoring files, chats, or browsing, building profiles, and then reporting to HQ. Because it’s buried in the black box, you wouldn’t see it in Task Manager, you wouldn’t get a notification, wouldn’t notice any performance hit, independent audits can’t reliably prove or disprove its existence.
Enterprises can isolate an audit. Personal users are completely blind and vulnerable. Number seven, AI agent hallucinations. Plausible, but catastrophically wrong. Microsoft openly warns agents can confidently produce incorrect actions, deleting wrong files, sending fabricated emails, booking non-existent reservations. Unfortunately, this is so easy to document and duplicate. Ask it some really recent information, like who will be in the college football playoffs or which team is doing well. I found that even Grok, which is heavily reliant on current event search data, cannot accurately understand dated events. Every week, there’s a new ranking and new published game results, and it will speak with confidence about who the possible playoff champion will be without actual facts and unable to sequence content by date.
When it comes to technical details, it can become confused by the weighting of its machine-learned data versus current search data. But the problem is that in a hallucination, the AI is adamantly confident of its choice. This can be controlled in an enterprise environment where each agent is assigned particular tasks with limited outcomes, for example, like insurance claims processing. But this can be disastrous with things like personal relationships where everything is nuanced and it performs some automated email response where the AI misses the nuance, thus causing immense problems. AI trace logs, testing, and monitoring are nightmares. Fun-voluted logs, infinite edge cases, personal users have no realistic way to audit what the agent actually did.
Again, this is an environment better suited to the enterprise. If the agent is intended to replace employees, then you can test the outcomes and audit processes because that’s what an enterprise would do. How does personal use of AI allow for this kind of auditing and control? Can we even classify emotions and preferences this way? 9. AI agents can be phished or tricked This is one of the scariest aspects of AI in my mind because it’s actually easy to do. Given the fact that a co-pilot knows everything about you from Windows Recall, then a malicious email could phish you and make a seemingly innocent request.
For example, which might make the AI reveal personal information in a response. For example, a phisher can solicit feedback via email to profile people by profession and income using simple emails talking about potential employment. This is a very common phishing approach. But people who are experts know that you can force the AI to change directions by having it do unexpected things via trick prompts. It’s even given a name, prompt engineering. 10. The slippery slope in a full AI agent to AI agent world Here’s the world we don’t understand. Let’s say a large portion of the population uses AI agents to respond to messages.
Then it becomes possible that even external communications become just AI to AI nonsense. While this garbage can be tolerated in an enterprise environment where only business is discussed, what are the implications of a bot taking in the personality of its master, sounding authentic? Then full conversations and commitments are made all by agents without the actual real users being aware of what they’re committing to. And without the AI sensing that they’re not talking to real people, and the possibility that the whole conversation is made up from one agent to another agent. This is already happening in social media, but it can get more personal.
Microsoft’s fundamental misfocus The fact is, an agentic OS, as envisioned by Microsoft, is truly ill-suited for personal use. The privacy and security risks and lack of a methodology for a regular user to perform audits and establish guardrails make it unsuitable without the knowledge that would be required from AI experts. Microsoft is wrong to focus on agentic AI for the personal market. Microsoft real money is in enterprise and cloud. Consumer windows is basically an afterthought, yet they keep forcing experimental high-risk features on the masses for the sake of vision and hype. The backlash is clear.
Windows 11 adoption is sluggish. Linux and Mac OS mentions are spiking. Microsoft wake up. Separate the enterprise power tools from the consumer OS or risk losing personal users for good. This is potentially transformative genius for controlled enterprise environments. Absolute privacy, security, and autonomy disaster for personal Windows users. I would personally disable all embedded AI-related features of a PC for personal use, meaning anything that can run AI agents you do not control. In a recent video, I found that reinstalling Windows and using a local account disables much of the AI agent nonsense. But since this is impossible to verify completely in Windows, I would truly recommend that for personal use, and if you are interested in privacy, that end users should switch their computer use to Linux.
Otherwise, you’re going to have to make frequent changes to Windows 11 to disable agentic features that are independently discovered. It will be like constantly screening for spyware. I want to also bring to your attention that the Apple Intelligence and Google Gemini are not currently pushing for autonomous agentic AI in their operating systems, though this aggressive push is solely Microsoft’s. The other OSes are more focused on real-time AI automation and local use. After hearing what I said, which risk hit you hardest? The baked-in manipulation through bias, the undetectable surveillance insertion, or something else? Drop it in the comments below.
Like if Microsoft needs to pump the brakes on consumer agentic features. Folks, if you’re serious about privacy, come join us at Braxme. It’s the growing community where real privacy people hang out. No censorship, no nonsense. While you’re there, check out the tools we actually built and use ourselves. Braxmail, unlimited aliases, no IP leaks. Brax Virtual phone, real anonymous numbers. Bites VPN, no logs, no BigCorp BS. The Google phones and more in the store. The Brax 3 second batch is open for pre-order right now at Braxtech.net, the first batch sold shortly after release. Big thanks to everyone supporting us on Patreon, locals, and YouTube memberships.
You keep this channel alive. See you next time. [tr:trw].
See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.