📰 Stay Informed with My Patriots Network!
💥 Subscribe to the Newsletter Today: MyPatriotsNetwork.com/Newsletter
🌟 Join Our Patriot Movements!
🤝 Connect with Patriots for FREE: PatriotsClub.com
🚔 Support Constitutional Sheriffs: Learn More at CSPOA.org
❤️ Support My Patriots Network by Supporting Our Sponsors
🚀 Reclaim Your Health: Visit iWantMyHealthBack.com
🛡️ Protect Against 5G & EMF Radiation: Learn More at BodyAlign.com
🔒 Secure Your Assets with Precious Metals: Kirk Elliot Precious Metals
💡 Boost Your Business with AI: Start Now at MastermindWebinars.com
🔔 Follow My Patriots Network Everywhere
🎙️ Sovereign Radio: SovereignRadio.com/MPN
🎥 Rumble: Rumble.com/c/MyPatriotsNetwork
▶️ YouTube: Youtube.com/@MyPatriotsNetwork
📘 Facebook: Facebook.com/MyPatriotsNetwork
📸 Instagram: Instagram.com/My.Patriots.Network
✖️ X (formerly Twitter): X.com/MyPatriots1776
📩 Telegram: t.me/MyPatriotsNetwork
🗣️ Truth Social: TruthSocial.com/@MyPatriotsNetwork
Summary
➡ Apple is struggling with implementing AI and is planning to use Google’s Gemini, paying Google a billion dollars a year for it. The current AI technology, like chatbots, can only respond to specific questions and can’t perform tasks independently. However, a new type of AI, called agentic AI, can independently perform tasks, such as processing insurance claims, which could potentially eliminate certain jobs. This new AI technology is expected to have a significant impact by the end of next year.
➡ AI technology is changing the job market, especially for entry-level positions like insurance claims processors and underwriters. A company called Anthropic has developed a standardized protocol, called Model Context Protocol (MCP), to improve AI’s decision-making process. This protocol allows AI to communicate with other software and access resources or tools it needs to complete tasks. However, this technology is still being developed and requires careful management to ensure the AI is making correct decisions.
➡ The text discusses how artificial intelligence (AI) is being used to automate tasks in various fields, such as insurance claims processing. It explains how AI can take over tasks like data entry, communicating with third parties, and even writing checks, which were previously done by humans. The text also mentions Microsoft’s development of an ‘agentic OS’ that can manage these AI tasks. However, it warns of the potential job losses and social implications of this automation, particularly for entry-level positions.
➡ The speaker discusses Tesla cars as robots with LLMs (Large Language Models) and the potential for these robots to develop AGI (Artificial General Intelligence). They also mention the upcoming release of Optimus, a robot expected in 2026. The speaker explains the concept of MCP servers, which are programs that can perform specific tasks and communicate with LLMs. They predict that we will see many MCP servers in the coming year, and that these servers can be located anywhere with internet access. The speaker also mentions that Microsoft is already installing MCP servers in Windows 11.
Transcript
But I’m trying to alert you to AI and what AI is going to do now. If you’re not up to date on this, or if you’re still saying AI is not real and that this is something you’ve got to avoid at all costs, that’s probably not a good way to think. Forewarned is forearmed, and in this world you’d better be the one controlling the AI rather than the victim of AI who ends up with no job.
Fortunately, I don’t have to worry about jobs; I’m way beyond that stage. When was the last time I worked for somebody? I think the last time I worked for somebody else was when I was 25. From then on I was always the CEO or the owner of the companies I was in, so I never had to worry about jobs. Some of you are posting on YouTube, oh, he used to work for Microsoft, he used to work for Google.
No. I never worked for any of those companies, never worked for any big tech company. I did own a company that was a Microsoft solution provider, meaning we made software built on Microsoft architecture. Because of that, Microsoft gave us perks and I got invited to the Microsoft offices many times, so I had a lot of connections with Microsoft during that time, but obviously I have nothing to do with Microsoft now. In fact, since I’ve been doing this live streaming, I don’t use any Microsoft products.
I don’t use Microsoft Office, I barely use Windows, and I no longer have any solutions that rely on Microsoft at all. But in any case, the topic right now is the definition and discussion of agentic AI. Just to give you a little background: it’s funny, it’s only been about three years, can you believe that? And in three years the world has changed so much. I believe I made my first AI videos shortly after that, and even I had to learn, because I didn’t know what a transformer was. I had to go learn it and then make many videos to learn it even more.
And I’m a pretty heavy AI user. I use it extensively, I know its flaws probably to a T, and I know when to rely on it and when not to. It currently has a lot of flaws. I myself was promoting the use of Ollama, which is offline AI; I was teaching all that and showing you videos on how to use Ollama, running it on your own computers or your own servers. And then after using it for a while, I realized that it really doesn’t do the trick.
So I’ve been using mostly Grok lately, I guess. It came by default with X, and I have an X account, so it came free with X, and I also have a paid Grok account with X. I’ve tested things using Grok, like seeing whether I can teach Grok. I actually have two separate Grok UIs, one on X and one outside of X, and I was testing whether the conversations I have with Grok end up in Grok itself.
And they don’t. In fact, I wish they did, because I want to teach Grok, and Grok doesn’t want to be taught. The only way to teach Grok is to influence the actual social media, and then the social media teaches Grok. Anyway, the original thing you heard about was ChatGPT coming out three years ago, November 2022, so almost exactly three years ago, and where we are now is night and day. The period from 2022 until recently was the era of the chatbot, so our understanding of AI is that AI is a chatbot.
Then, as you know, last year Microsoft announced Windows Recall, which raised the hackles of everyone using Windows; you know the whole story about that and the introduction of the see-what-you-see technology, so that the chatbot now sees everything and supplements its knowledge with what it sees. You’re giving it more information, and the term for that in AI speak is context. You make a query, and that query is called a prompt; you’re prompting the AI. The AI normally just responds based on the knowledge it learned during the training process when it was built, usually one or two years earlier, which is essentially the entire knowledge of the internet.
Ask it anything outside of what it learned and it doesn’t know it. Many were faulting AI for this when they asked ChatGPT about the Trump assassination attempt right after it happened, and ChatGPT said there was no attempted assassination on Trump. Then they came back and said that’s politically motivated, and went and stated in the press that AI is really bad because it lies to you, that it’s all politics, that it can’t even be honest about the Trump assassination attempt.
Well, that’s because the people making those statements don’t know how AI actually works. Yes, AI can be completely biased, but that has nothing to do with current events. The fact of the matter is that AI doesn’t know current events, because that’s not part of its abilities. An LLM, or large language model, doesn’t actually know anything beyond what it learned from the internet a year or two before; it’s always late. I remember a couple of years ago I would ask an AI like ChatGPT what it knows about Rob Braxman and get a lot of nonsense, because it doesn’t really know.
Nowadays, if you ask AI about Rob Braxman, it pulls a lot of garbage from Reddit, truly garbage, you know, from my enemies. Which kind of goes to show you that there’s now integration between the learned AI and current content. The idea of mixing learned data from machine learning with more recent data means search is adding context to the AI, and the AI is then able to merge the two. Not always precisely, not always accurately. But that’s the new story of chatbots.
For example, nowadays if you ask AI about a current event that it doesn’t know from its training and that is something completely new, it will just find it in search. So in a lot of ways it’s a merging of search and AI, but it’s still a chatbot; that hasn’t changed. Even Windows Recall is just an enhancement of the chatbot. Windows Recall itself doesn’t add anything other than giving the AI additional information about you personally. It basically adds context; the term for adding background to the AI is context. In the Windows world, the context comes from Windows Recall, which captures everything you do, and that gets sent to the AI.
In Gemini, the context comes from Google SafetyCore, which captures your media and then summarizes it, and that becomes the current context for the AI. But it doesn’t really know you personally yet, other than what it knows from your Google interactions, which it does know. In contrast, Apple knows something about you through the mediaanalysisd module, which analyzes your media and is able to categorize it and know that about you, but it doesn’t really know much about you yet in terms of other things.
Now, the way Apple Intelligence is being built is that it’s going to scour through your text content, meaning your text messages, your iMessage, your emails, current data, summarize it, keep it in the context, and then be able to reference it as the personal context, but not in the same way that Microsoft is capturing every screenshot. Apparently Apple is not going to be doing that. And Apple is really way behind and is having great difficulty implementing AI, so maybe I shouldn’t be so scared about Apple right now, because they’re failing.
Because Apple can’t even come up with its own AI for Siri, it’s intending to use Gemini. The plan used to be to use OpenAI for Apple Intelligence, and now I hear they’re going to go with Google Gemini and pay Google a billion dollars a year to use it on the iPhone. So in a sense the Apple threat has kind of diminished, because what they’re actually going to be able to do looks like a lot less than I expected. But all of this stuff I’ve talked about so far is still about the chatbot.
So we haven’t progressed; a Siri version or a Google Gemini version is just a talking version of the same thing. It’s still a chatbot, it just talks to you. Grok now has a talking version, and if you have a Tesla, you can talk to Grok on the Tesla. It’s actually pretty good. You can use it to practice languages; I can use it to talk to the Tesla in French, and it picks up any language.
It’s pretty incredible: you can talk to it in any language you want and it handles it. But again, even that technology is still a chatbot. Now, what’s not a chatbot? Well, a Tesla is not a chatbot. That’s the first example of an AI that isn’t a chatbot: Tesla, or obviously Waymo, with self-driving intelligence. You’re not talking to the car; you give it a destination and it goes there and deals with the intelligence of driving around in traffic. And I’ll be honest with you, a big chunk of my driving is now self-driving.
If I had to guess, 80% of the time I’m just using self-driving, and I only override in certain spots because I know the AI has weaknesses, like a specific pothole I know is going to be there. So I take over before the pothole and then keep going with Tesla after that. Still, we are in the baby stages here, even including the Tesla technology. The real change is this new thing called agentic AI, which you haven’t seen yet.
There are demos of it. OpenAI has demos, there are limited versions of it on Windows, pretty much fluff on Google Gemini, and nothing on Apple. Apple Intelligence is basically a bomb; there is no Apple Intelligence. I made a video that went viral talking about the ghost in the machine, where I presumed Apple Intelligence would be using see-what-you-see technology on the iPhone 16 and above, which is supposed to feature Apple Intelligence. And then we have no Apple Intelligence, and here we are entering 2026 and there’s still no Apple Intelligence.
So that’s not even the real threat now. What exactly is this new thing called agentic AI? Here’s the difference: with the AI you know from ChatGPT, all the talking versions after it, and all the enhanced context from Windows Recall knowing things about you, you’re still talking to a bot. It’s still the kind of exchange where you ask the AI something and the AI gives you an answer, and that’s really where it ends. It gives you an answer. That’s not the stage where it can take over the world, because somebody has to ask it a very specific question.
And it has to give you a very specific answer. Well, those days are just about gone now, and this is going to move very, very fast in the next year or so. The effect will probably be realized by around next year; maybe by the end of next year it will really have a significant impact, and I mean on jobs. What happened with agentic AI is that it is no longer about the AI just talking in the chatbot. The new AI is a task-oriented AI where you tell it to do something and it independently does it.
And this has nothing to do with privacy yet; I’m just talking about the technology in general here. The AI will work on the task independently, in the middle of the process it will seek information by itself, and at the end it will execute some sort of action, typically initiating some transaction. I like to use an example I’m very familiar with, because I was a management consultant in this field when I was young, and that’s insurance claims.
I was a management consultant, and I studied insurance claims processing to the point that I actually sat there, documented it, watched people do it, and even timed how long it takes to do the job of a claims processor. I was basically an efficiency expert and did that for many years; it was one of my early jobs. So I understand claims processing very well, and claims processing is an example of where agentic AI would really function, because the agent can examine the claim itself. Just think about this.
A claim comes in on some sort of form. Back in the old days the forms were paper based, so a claim form would come in, a person had to read it and then direct it to the proper claims adjuster, who would then negotiate the various aspects of the claim. Nowadays we don’t do that anymore. Normally you go to the internet, you put in a description on some website, you type in the incident, and you upload pictures if there are any. That’s normally how you deal with it.
So the data is already in a form that computers can recognize; you don’t have to do image recognition or anything like that. The AI can then read the claim, check that you’re a policyholder and that your policy is valid, look for fraud flags to see if there’s anything fraudulent in your request, look for evidence of the accident, look at the description to see who’s at fault, make recommendations for repair based on what’s available in the local area and what that repair is going to cost, and then negotiate with the relevant parties over liability and all of that.
And then at the end the claim is settled, and if there are checks to be written, the checks are written and approved by somebody. That’s the manual process today. And I’m going to tell you that nothing in that entire process exists that could not be done by an AI. The entirety of that process could be done by AI, and I don’t mean one AI, but a bunch of different AIs doing different tasks. That’s going to happen, because instead of a bot just talking, the bot actually extracts data from the claim, analyzes it, and then puts data back into a database, triggering some action that says, okay, initiating claim and authorizing repair at such and such a facility.
Then a message is sent to the repair facility saying you’re authorized to do the repair, we’ve examined the quote, the quote is fine, and all of that. It goes through the entire process, completes the claim, and gets signatures automatically, maybe by DocuSign or some method like that. You basically don’t need staff. So that’s an example of how a job can be completely eliminated. Now, AI agents like this, where the AI actually does something, need to be tested. You don’t just create some way of doing this and then leave it alone.
No, it has to be monitored. There’s always a human check, and there are checks along the way to make sure the AI is not making bad decisions, and this is going to go through a lot of testing. A rough sketch of what such a pipeline could look like follows below.
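What follows is a minimal sketch, in Python, of such an agentic claims pipeline. Every function name, claim field, and rule here is hypothetical, invented purely for illustration; a real deployment would sit on top of the insurer’s actual systems and route far more cases to human reviewers.

```python
# Toy sketch of an agentic claims pipeline (all names and rules are hypothetical).
# Each step stands in for what could be a separate agent or MCP tool.

SAMPLE_CLAIM = {
    "policy_id": "POL-1234",
    "description": "Rear-ended at a stop light, bumper damage",
    "photos": ["bumper1.jpg"],
    "repair_quote": 1850.00,
}

def policy_is_valid(claim):
    # Placeholder for a lookup against the policy database.
    return claim["policy_id"].startswith("POL-")

def fraud_flags(claim):
    # Placeholder for a fraud-scoring model or rules engine.
    return [] if claim["repair_quote"] < 10_000 else ["high_amount"]

def assess_fault(claim):
    # Placeholder for a rules/LLM step that reads the description.
    return "other_driver" if "rear-ended" in claim["description"].lower() else "unclear"

def human_review(claim, reason):
    # Hard stop: a person looks at anything the pipeline is unsure about.
    print(f"Escalating to human reviewer: {reason}")
    return False

def process_claim(claim):
    if not policy_is_valid(claim):
        return human_review(claim, "policy not found")
    flags = fraud_flags(claim)
    if flags:
        return human_review(claim, f"fraud flags: {flags}")
    if assess_fault(claim) == "unclear":
        return human_review(claim, "fault could not be determined")
    # "Tool" actions: in a real system these would write to databases,
    # message the repair shop, and cut the check.
    print(f"Approving repair for {claim['policy_id']} at ${claim['repair_quote']:.2f}")
    return True

process_claim(SAMPLE_CLAIM)
```

The specifics don’t matter; the shape does: structured data in, decisions made step by step by separate pieces, and a person pulled in whenever a check fails.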
So the jobs that used to be done by entry-level claims processors, and on the other side of an insurance company, by entry-level insurance underwriters straight out of college, those jobs will disappear, which is exactly what’s happening now. That is why new graduates are finding it difficult to get entry-level jobs: the only jobs that will be needed are the management jobs held by people who actually know the task, so they can examine the AI and say, okay, the AI is not doing the right thing here, the AI is making the wrong decisions. And they know that because they did the job before. That’s the consequence of agentic AI and how it relates to the world. Now I’m going to be very specific here so you understand the technology, what it’s called, and how they’re actually going to do this.
Because this is not theory; there are actual pieces of this that are in place or can be programmed to be in place. And it’s not as simple as I described, because I made it sound like you just turn on the AI and the AI will do the insurance claims processing. No, it doesn’t work like that. Right now all the programming pieces still have to be built, but they came up with a way of standardizing this, and the company that made the standardized protocol is called Anthropic. Anthropic makes Claude, the Claude AI, so if you’re familiar with Claude you will have heard of Anthropic.
And Anthropic made Claude Desktop, which was the first to implement this. What they implemented was a way for Claude Desktop to communicate with other software using a common interface, so that messages can be passed between the Claude Desktop chatbot and programs that actually interact with the world. The terminology for this interface is the Model Context Protocol, or MCP, and you’ll see it everywhere. Every day I see a ton of it, maybe because I searched for it once, and now I get never-ending videos explaining MCP to me like I didn’t already know it.
Anyway, MCP, the Model Context Protocol, comes in various flavors, and the programmer creates an entity that manages services, called an MCP server. I always get told that I need to define things in advance, so I’m going to define it for you, but I already told you that AI needs context. If you don’t give it any information, it’s going to base its response on the data it learned during machine learning, which is two years old or so. So you can’t ask it about a recent event, like who the killer was in the Brown University case, or who the killer was in the MIT professor case.
If you ask an AI that, the AI will say, I don’t know, that didn’t happen. So you give it context. The old way of giving it context was to pass it a search link, typically, or your own database. If you’re doing tech support, you can give it a tech support database. Let’s say you’re Samsung and you make washing machines, and you want an AI to answer questions about the washing machines. You link the entire manual and every document you have to the AI using something called retrieval-augmented generation, or RAG.
That was how context was provided in the past: RAG. And the problem is that how you link something like that, how you attach some resource so the AI can use it, was not standardized. Everyone had a different way, so RAG was implemented differently by every company, some doing it this way and some that way. It required a lot of programming to figure out how you were going to link the AI to any of this, which slowed down the process of actually implementing anything in AI. A bare-bones version of that kind of RAG wiring is sketched below.
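This sketch assumes a made-up washing-machine manual and uses crude keyword matching where real systems use embedding search; the point is only the shape: retrieve relevant text, then paste it into the prompt as context.

```python
import re

# Made-up manual snippets standing in for a real document store.
MANUAL_SNIPPETS = [
    "Error E3: drain pump blocked. Clean the drain filter behind the bottom front panel.",
    "Error E1: water supply issue. Check that both inlet hoses are fully open.",
    "To run a cleaning cycle, hold the Temp and Rinse buttons for three seconds.",
]

def words(text):
    # Lowercase word set with punctuation stripped (a crude stand-in for embeddings).
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, snippets, top_k=2):
    # Rank snippets by word overlap with the question and keep the best few.
    q = words(question)
    return sorted(snippets, key=lambda s: len(q & words(s)), reverse=True)[:top_k]

def build_prompt(question, snippets):
    context = "\n".join(retrieve(question, snippets))
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# The resulting string is what actually gets sent to the LLM.
print(build_prompt("What does error E3 mean on my washer?", MANUAL_SNIPPETS))
```

Every shop used to wire up something like this in its own way, which is exactly the fragmentation MCP was meant to remove.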
So Anthropic solved that by saying, we’re just going to standardize, and we’re going to have one way. The way is simplified and uses common techniques, basically HTTP and HTTPS, similar to anything you do on the web, and it’s called the Model Context Protocol, or MCP. RAG can now be implemented via MCP. Somebody can take any utility, like a resource or a search engine, create an MCP server for it, and attach it to an LLM, and the LLM can then use that MCP resource. So now we’re standardized: whether you go to OpenAI, Grok, or Anthropic, all you have to do is link up the MCP servers. They all have a standardized way of communicating, and there’s a clean hookup into the LLM so that the text results from the MCP server actually go to the LLM. Roughly, the messages passing back and forth look like the sketch below.
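The traffic between an MCP client (sitting next to the LLM) and an MCP server is a series of JSON-RPC style request/response messages. Below is roughly the shape of listing tools and calling one; I’m paraphrasing the protocol from memory, so treat the exact field names as approximate rather than authoritative.

```python
import json

# Rough shape of the JSON-RPC traffic between an MCP client and an MCP server.
# Field names are paraphrased from memory and may not match the spec exactly.

list_tools_request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "tools/list",
}

list_tools_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "web_search",
        "description": "Search the public internet and return text snippets",
        "inputSchema": {"type": "object",
                        "properties": {"query": {"type": "string"}}},
    }]},
}

call_tool_request = {
    "jsonrpc": "2.0", "id": 2,
    "method": "tools/call",
    "params": {"name": "web_search",
               "arguments": {"query": "latest ruling in the case"}},
}

# The text the server sends back is ultimately what gets handed to the LLM as context.
print(json.dumps(call_tool_request, indent=2))
```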
So basically it’s just text; it’s no more complicated than that, it really isn’t. The AI needs text, but it needs to be supplied the text in a meaningful way, because the AI has limited working memory, which is called the context. It has a limited space in which to evaluate the problem, and that space is called the context window. In most cases, back in the day, the limit was 4096 tokens. Now something like Grok will have 128K, so you can give it a good-sized document and it should be able to process it, but you can’t give it an entire epic novel, 700 pages of something; it doesn’t know how to do that. You’ve got to piece the data into chunks the AI can use, roughly as in the sketch below.
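This is a crude illustration of chunking that approximates tokens as whitespace-separated words; real tokenizers split text differently, and real pipelines pick chunks by relevance rather than feeding them all in order.

```python
# Crude chunking sketch: approximate tokens as whitespace-separated words.
# Real tokenizers differ, but the budget problem is the same.

def chunk_text(text, max_tokens=4096, overlap=128):
    words = text.split()
    chunks, start = [], 0
    while start < len(words):
        end = min(start + max_tokens, len(words))
        chunks.append(" ".join(words[start:end]))
        if end == len(words):
            break
        start = end - overlap  # keep a little overlap so no passage is cut in isolation
    return chunks

long_document = "word " * 10_000          # stand-in for a 700-page novel
pieces = chunk_text(long_document, max_tokens=4096)
print(len(pieces), "chunks, each small enough to fit in the model's context window")
```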
The way all of this is done now is standardized around MCP servers. Claude Desktop, all of these tools, Grok, OpenAI, and Microsoft have interfaces to it. In the case of Microsoft, the AI is Copilot, and Copilot has MCP clients that can talk to MCP servers. The way it works is that you present a problem to the AI, and the AI says, okay, given the problem at hand, whatever you’re asking for; let’s say the problem is to process this claim.
Well, the AI doesn’t know how to process the claim. So the AI can say, okay, I can process the claim, but give me the claim data, tell me what I’m supposed to do, and give me the resources to decide how I’m going to evaluate that claim data. What it does is say, okay, give me a list of MCP servers. So if you are a maker of AI agents, you just publish your capability as an MCP server, and you publish it in a way the AI can query. The AI might find a claims reviewer, a claims reader, a negotiator, all these different modules somebody can create: photo analysis to analyze accidents, a fault AI that decides who’s at fault based on legal rules, and so on.
Those could each be individual MCP servers. The point is that the AI side is the usual LLM; it doesn’t matter which one, they can all work, and they can be in the cloud. It doesn’t have to be on your computer, because even Grok can deal with an MCP server; it just has to get access to it. A lot of these MCP servers are already online. If you look, there’s actually an MCP server list, and some third parties have already made their own MCP servers and make them available for a subscription fee, and then you can insert that MCP server into your solution.
A big corporation is going to make its own MCP servers for the specific tasks it needs to do itself, but if it needs to look at the internet, it’s likely to look for an MCP server that does search. So you can have an MCP server that does any internet search. Now, MCP servers are going to be grouped into two kinds: one kind is resources and the other is tools. Resource MCP servers are ones that give information.
In the case of claims processing, something may ask, okay, who’s at fault? That could be a resource, where you give the AI agent data and it pulls current legal data relating to who’s at fault. If the car got hit from behind, then the car hitting from behind is at fault; on a left turn you yield the right of way; those rules are already formulated and written down somewhere. The agent then provides that data as context, and that’s a resource-based MCP server.
That expands the capability of something like ChatGPT, which can now actually have information related to the task, information that can be very specialized. That’s the MCP server as a resource. The MCP server as a tool is when it takes action, and it takes action by saying, okay, I’m now going to file the claim. That’s a database entry saying the claim is now official, we accept the claim, and now we’re going to process it. Then we begin things like getting quotes, and those are actions; they require communicating with a third party, communicating with the repair shop, communicating with the client, issuing checks, making payments to the repair shop. A hedged sketch of the resource-versus-tool split in code follows below.
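This sketch uses the FastMCP helper from Anthropic’s MCP Python SDK as I understand its current shape; the decorator names could drift between SDK versions, so check the official docs before copying it. The fault-rule resource and the check-writing tool are invented stand-ins for the insurance example, not anything that actually exists.

```python
# Hedged sketch using the MCP Python SDK's FastMCP helper (API names as of my last look;
# verify against the SDK docs). The insurance logic itself is invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("claims-helpers")

# A *resource*: read-only information the LLM can pull in as context.
@mcp.resource("faultrules://{scenario}")
def fault_rule(scenario: str) -> str:
    rules = {
        "rear_end": "The trailing driver is presumed at fault.",
        "left_turn": "The turning driver must yield to oncoming traffic.",
    }
    return rules.get(scenario, "No rule on file; escalate to an adjuster.")

# A *tool*: an action with side effects, invoked when the agent decides to act.
@mcp.tool()
def write_check(claim_id: str, amount: float) -> str:
    # In reality this would hit the insurer's payment system, not just return a string.
    return f"Check for ${amount:.2f} queued for claim {claim_id}"

if __name__ == "__main__":
    mcp.run()  # serves the resource and the tool to any connected MCP client
```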
So maybe there’s a check-writer MCP server. The claims agent says, okay, we approved it, so we’re going to send the approval to the check-writing MCP server, and the check-writing MCP server takes over and writes the check. Basically these are all programs that would have been written anyway, because some person used to type the check into the check register for the insurance company’s accounting, and that was eventually automated. I used to work in programming at insurance companies, and every night the servers would run the night batch, calculate, and create the checks; tons of checks would be printed out the next day and a machine would stuff them into envelopes and all that.
That is now where the AI takes that role and does it through a link to the same programs; that part doesn’t really change. The claims processing programs that exist still exist, but instead of a human typing, the data entry is done by AI, driven by an LLM. And because this protocol simplifies the communication so you don’t have to guess, a third party can create an MCP server that, in theory, can be used by all insurance companies, because there wouldn’t be much difference in claims processing from company to company.
So a company specializing in MCP servers for claims processing could make one, connect the back end to the corporate systems, and there you are: now you can get rid of claims processors and basically eliminate those jobs. That’s the reality of what’s going to be happening, and it has just started. Microsoft is creating this thing called an agentic OS, and what it did was build the agents into the OS itself, into Windows, and Microsoft is the only OS vendor doing this.
In a way you might think Microsoft would be worried that you’re going to move to Linux and stop using Microsoft Windows, but in the corporate world they are so powerful that they can actually control these environments, like I told you, and they can run the computers in the cloud without a human being. A Microsoft machine can run and replace the various people I’m talking about; they can just be machines running Copilot in the cloud, doing claims processing. Microsoft will treat them like employees: you fire up a claims-agent desktop, it runs in the cloud, and when you don’t need it anymore, you fire it.
Okay, so that’s the environment we’re talking about, and that’s what’s being built into the Microsoft OS. The registration of MCP servers and MCP clients is now part of an MCP registry. Microsoft itself made those connections so that any app maker can make an app and run it on Windows over Copilot; Copilot then manages the agents, and agents can initiate other agents. The claims-processing agent, for example, can decide whether the claim is approved and then trigger the agent that actually writes the check. That’s agent-to-agent communication, sketched in toy form below.
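This is only a toy picture: a shared registry maps capabilities to agents, and one agent’s decision triggers another. Microsoft’s actual registry and Copilot plumbing are far more involved, and every name here is made up.

```python
# Toy agent-to-agent handoff through a shared registry (all names are invented).

REGISTRY = {}

def register(capability):
    def wrap(fn):
        REGISTRY[capability] = fn
        return fn
    return wrap

@register("write_check")
def check_writer_agent(claim_id, amount):
    return f"check issued for {claim_id}: ${amount:.2f}"

@register("process_claim")
def claims_agent(claim_id, amount, approved):
    if not approved:
        return f"claim {claim_id} denied, no downstream agent triggered"
    # Agent-to-agent: the claims agent looks up and invokes the check writer.
    return REGISTRY["write_check"](claim_id, amount)

print(REGISTRY["process_claim"]("CLM-42", 1850.0, approved=True))
```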
That is the reality of what’s happening today, and I don’t mean to scare you, but it’s going to change the world very fast. Some of you are saying, we don’t want to understand AI, we don’t want to understand the effects of this. It’s already being felt by many, many young graduates who cannot find a job, because a lot of the entry-level jobs are gone. Everyone is expected to already be an expert, so if you are entry level and don’t yet know what you’re doing in an insurance company, well, they’re no longer going to have beginners at an insurance company.
This is a terrible thing. I don’t know the social implications, or even the managerial implications, of not having people who can grow into managerial positions when you don’t have entry-level positions. I don’t know how that’s going to work out, so there will be quite a bit of disruption here. And this doesn’t even get into the privacy issues and Microsoft’s moves into your personal relationship with an agentic OS. I haven’t said anything yet about a personal relationship with an agentic OS, which is the subject of the upcoming video. But I want you to understand that this is something to think about in a big way.
When you see this and start to understand it, maybe Microsoft isn’t so stupid after all; they just don’t care. That’s something you’ve got to think about: does Microsoft really care if you do not like AI? I’m going to tell you right now, the bulk of my followers hate AI and are looking for ways to make sure they have no connection with AI, because they look at it as a privacy invasion. But you really can’t look at it only like that. You have to understand the pieces so that you know which parts of AI you’re going to have to deal with.
Because regardless, in most jobs there will be some effect of AI on your job, if you’re still working. If you’re not working, that’s a whole different issue, and if you’re retired and you don’t have a YouTube channel to run, maybe you don’t need to worry about AI so much and you can just say, okay, I’ll avoid that, in which case you need to know how to step around it. But the point is that for the average person, AI is going to result in huge changes. And one of the changes, by the way, is not necessarily negative.
It means that those who are working will have a lot more free time. Jobs will be easier, because they’ll involve more thinking rather than paper pushing, so the work will be more like management. Those with management skills can manage an AI, and I think that’s expected to continue to be a skill that will be needed. But the doers, the ones who do the work, will still be needed, only in much smaller numbers, and mostly they’ll be needed to train the AI on what to do.
In the case of claims processing and insurance underwriting, I hate to use that as an example, because some of you may be in that field; it’s just a field I know well, and I know it from multiple fronts. I know health insurance claims; my company created billing systems for health insurance initially, and then I became one of the inventors of electronic health records, so I developed electronic health record systems. I know this very well, and in fact I can see how some of this is going to affect the healthcare field as well.
But the claims side is definitely one of the things that is very rote; they operate off fixed rules, and it would be easy for that to be implemented by AI. Do you use Macrohard? Macrohard, what is that? No, I don’t know what that is. Will this agentic solution be integrated into robots for more invasive job replacement? I was actually going to discuss that in the video, and maybe I don’t have enough knowledge about robots to create a full script on it, but obviously a Tesla is a robot.
I have daily experience with a robot, since that’s basically what a Tesla is: a robot, and that robot has Grok, so my robot car has an LLM. This is one of the reasons I use a Tesla: I’ve got to keep up with the technology and understand what it’s doing, and I can’t do that by saying I’m afraid of it. I’ve got to understand it to see what the risks are. And it’s fine.
I don’t see any current problem with having a Tesla. Anyway, the Tesla is a robot: you give it instructions, you override it, and so on. One of the topics you hear a lot about is what happens when AI becomes ultra-intelligent, reaching AGI, or artificial general intelligence, meaning it’s smart on its own, doesn’t need to be taught anything, learns on its own, decides things, and creates its own solutions: that scary world that people describe with AGI.
I don’t know, factually, whether we’re going to get to AGI anytime soon. Elon Musk says you just need more compute, and I’m actually of the opinion that LLMs can probably not be AGI-ready. I made a video about how LLMs actually work with the transformer architecture, how they pull words in sequence one by one, and how they don’t even know what they’re going to say until they say it. That’s why they can come up with nonsense. Now they’ve addressed that by going through a thinking phase.
Run one, and then after run one they do run two, so they can analyze what they just said and then run it again. That’s how they address it. But that’s not real intelligence in my mind; that’s not my idea of true intelligence. It’s almost like trial and error: let’s see if I can come up with a better answer if I do it twice. So it’s a limitation of the technology, of how an LLM works; a crude picture of that two-pass trick is sketched below.
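The sketch: generate a draft, then feed the draft back in and ask the model to critique and rewrite it. The llm() function here is a hypothetical stub that returns canned text so the example runs; in practice it would be a call to whatever model you’re using.

```python
# Draft-then-revise sketch. llm() is a hypothetical stub standing in for a real model call.

def llm(prompt: str) -> str:
    # Stand-in for an actual API call; returns canned text so the sketch runs.
    return f"[model output for: {prompt[:40]}...]"

def answer_with_second_pass(question: str) -> str:
    draft = llm(f"Answer the question: {question}")
    critique_prompt = (
        "Here is a draft answer. Check it for factual or logical errors and rewrite it.\n"
        f"Question: {question}\nDraft: {draft}"
    )
    return llm(critique_prompt)  # run two: the model reviews what it just said

print(answer_with_second_pass("Who is at fault in a rear-end collision?"))
```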
Somebody has to come up with a new way of understanding intelligence that goes beyond the LLM approach, and when that happens, then we’re going to be really afraid of these robots, because can you imagine a robot with AGI? Robots are going to be a big thing. As you know, Optimus is slated for release; some people will have some version of a robot in 2026. I never thought I’d be alive for that, but in 2026 you could have some robots, and five years from now there could be many robots. How does that interrelate with your life? What does that do? Is somebody doing your laundry now, washing your dishes, and all that? And is that all you want the robot to do? Anyway, there are many implications of this that could be the subject of multiple videos, and I don’t really know yet.
It’s something, let me tell you. I just gave you a backgrounder tonight, a background on what’s happening: the idea that they’ve finally made a common interface so that communication between different kinds of LLMs and the task world, computer tasks and programs, is now possible, because you don’t have to create it from scratch. There’s a standard interface for passing messages between an LLM and any kind of program. So that’s now here, and you’re going to see a whole ton of products called MCP servers in the next year.
You’re going to be seeing MCP servers left and right, with different functions; some will do very specific things and some will do more complicated things. But let me tell you this: even the MCP server concept is at such an early stage, because they’re not really indexed yet. For example, what if you have two MCP servers with the same name? How do you know which one should do the task? Let’s say the task is process a claim, or to simplify, let’s say the task is search the internet, and you’ve got two MCP servers and both of them search the internet.
Right now that’s going to crash the LLM, because the LLM won’t know which one to use. It’s not intelligent enough to just pick, because this is really a programming interface, not an intelligent interface where it randomly decides what to connect to. What actually happens is this: the LLM says, I’ve got a problem to solve; all of the MCP servers, please announce yourselves and tell me what you do. So every MCP server registered to the LLM will say, I can search the internet; I can process claims.
I can make reservations for you; I can buy groceries; I can buy groceries at Whole Foods; I can buy products from Amazon. It can be that kind of specificity. So those MCP servers announce themselves. When the client says I want to buy something, and it’s general, like I want to buy toilet paper, you tell the LLM that, and the LLM says, okay, which MCP servers are able to buy toilet paper or buy household items? And one of them raises its hand and says, I’m the MCP server that can do that.
But if the user says I want to buy toilet paper from Amazon, then the LLM can say, okay, which MCP servers can buy from Amazon? So anyway, that’s the world, and you will really see a lot of this in the coming year. A toy version of that kind of selection by announced description is sketched below.
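Each server announces a name and a description; in real systems the LLM itself reads those descriptions and chooses, while this sketch stands in a crude keyword score for that judgment. All the server names and descriptions are invented.

```python
# Toy disambiguation between announced servers that can all "do" similar things.
# Real clients hand these descriptions to the LLM and let it choose; here a
# crude keyword overlap stands in for that judgment. All names are made up.

ANNOUNCED_SERVERS = [
    {"name": "amazon-shopper", "description": "buy household items and products from Amazon"},
    {"name": "wholefoods-shopper", "description": "buy groceries from Whole Foods"},
    {"name": "web-search", "description": "search the public internet"},
]

def pick_server(user_request, servers):
    req = set(user_request.lower().split())
    return max(servers, key=lambda s: len(req & set(s["description"].lower().split())))

print(pick_server("I want to buy toilet paper from Amazon", ANNOUNCED_SERVERS)["name"])
```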
MCP servers just need to be connected to the LLM. They don’t need to be part of any operating system; they can be on the internet or local. Microsoft provides a way to program your own MCP servers locally using VS Code. If you’re a programmer, you can just make an MCP server and test it locally against your own setup, and when you want to put it in production, you can put it somewhere else: on the internet, wherever you want. Then you just provide the location, saying such-and-such MCP server is at this URL and this port, and you’re an MCP server. So basically MCP servers can be anywhere with internet access, or they can be local. All you need to do is register with the user’s LLM to say, I’m here to do the work.
We’re not at the stage yet where MCP servers are just randomly installed on your computer. However, Microsoft is pre-installing MCP servers for Copilot, covering things you already know: Windows Recall, File Explorer search, reading documents. They have a long list of things that are already included in Windows, so there are pre-installed MCP servers; that’s just so you understand the terminology. If you are on Windows 11, there are pre-installed MCP servers, and some of those are going to be dangerous, like Windows Recall. And then you have the option of registering more MCP servers and installing them just like any other kind of server.
You’ll have MCP server installations; you’re going to see this kind of stuff. So that just gives you a little bit of background. In the video that’s going to come out on January 1st, I’m going to give you a break from negativity for the Christmas season, so you can just think about happy things for the next two weeks. But after that, we go back into Microsoft, and I’ll tell you specifically about the agentic OS implementation Microsoft is doing, how that impacts personal use, and how the personal-use part is very badly made.
See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.