Summary
➡ A group of teenagers has started a billion-dollar company called Aru, which uses artificial intelligence to simulate human behavior and predict future events. This method, known as silicon sampling, is being used to generate survey responses quickly and cheaply. However, there are concerns about its accuracy and reliability, since no human respondents are involved. The company has attracted significant investment, but there are questions about its origins and the potential for manipulation of its findings.
➡ This text is a reminder to be aware of the influence of popular trends and ideas, which can sometimes be pushed onto us. It’s a message from a podcast called The Corbett Report, which provides information to help people understand the world better. The speaker, James Corbett, invites listeners to explore more of his work on his websites.
Transcript
How stupid that is. That’s not even a poll. That’s just a definition. If you are MAGA, then you support Trump. I don’t know. It’s just, ugh. But no, actually, I am not talking about that, because yes, that was the stupidest poll I’d ever seen in my life until I saw a different one. And you’re gonna have to bear with me on this, because you’re gonna think, what’s so odd about that? It comes from an otherwise boring enough-looking article from Axios by Olivia Walton, “States Must Lead on Maternal Health Crisis,” which is telling us about a dire American crisis in maternal health.
Maternal mortality in the U.S. has more than doubled since the early 1980s. All right, interesting. Well, there’s a problem, and what kind of mainstream establishment, blah blah nonsense solution are they gonna propose? And then they hit you with this. Zoom in. New findings by Aru for Heartland Forward show that a majority of people trust their own doctors and nurses. Credibility rests on clinical expertise delivered through state-level channels. All right, so, so what’s the big deal? So some sort of poll found that a majority of people trust their doctors and nurses? What’s the big deal, James? No, no, no, no.
You don’t understand what this poll is or what it’s saying. You don’t know what Aru is, do you? And that’s something we’re gonna have to correct here today, because once you figure out what is actually being said and conveyed in this article, well, it brings a whole new nadir to the depths of reporting that we would expect from establishment repeaters. And not just repeaters. What am I talking about? Well, first of all, I’m talking about, hello, I’m James Corbett of Libertas.Earth, and you’re tuned into Propaganda Watch, that discontinued podcast that I occasionally revive when there is some propaganda that is worth reviving this series for.
And this qualifies, because once you wrap your mind around what is going on here, I think you’re gonna agree with me that this is absolute insanity. But in order to set the table, first of all, we should establish the baseline. Hopefully by now, having been an ardent and dedicated viewer of Propaganda Watch from those many years ago when I was doing it, you will already know that all polling is establishment-promoting BS. This week on the program, we’re going to examine a technique that is subtle, has a number of embedded assumptions and unstated motivations, and is extraordinarily effective for getting people to act in certain ways, or at least to believe certain things about the world that then inform a general viewpoint that motivates particular actions.
What am I talking about? I’m talking about the deployment of poll results, surveys, statistics. We’ve polled a certain number of people, and we found X. And this, in an unstated way, is supposed to get you to believe that X is true or right or valuable, and therefore to motivate you to act in a certain way. And this can be used in really innocuous ways, it can be used in insidious ways, it can be used in downright deceptive, manipulative, and malevolent ways in order to shape your viewpoint on a given topic. Now, hopefully you do remember and recognize that previous edition of Propaganda Watch from way back in 2019.
It was called “Polls Show People Aren’t Buying Establishment BS.” If you don’t remember that episode of Propaganda Watch, go back and familiarize yourself with it, or if you do remember it, re-familiarize yourself with it, because there are a lot of important points that I made in that video, talking about, wow, wouldn’t you know it, this biomedical research institute has conducted a poll that just happens to find that the public wants government to spend more money on biomedical research. Amazing. And other such insights about what the polls being reported in the establishment media are really about, which of course is not to be some sort of objective arbiter of what the public is thinking, but to shape what you are thinking, based on group pressure and shame and other such tactics.
As I say, I go through a lot of specific examples of that in that episode of Propaganda Watch. So this is a subject that hopefully you, and definitely I, I’m already familiar with. But this Axios poll that we’re looking at today throws a whole different spin on it. So first of all, let’s look at the fact that finally, it seems even the establishment can no longer actually hide the fact that opinion polls are there to shape opinion rather than to report on them. And so we get this even from mainstream garbage propaganda outlets like The Guardian, which had this up recently.
Should we ban opinion polls? And of course, as we all know, any question in a headline should be answered with no. So no, we should not ban opinion polls, but we should be aware of what they are. And actually, this is an occasionally insightful article on this subject that goes through a lot of the points that I was making seven years ago. And it comes to some of the same conclusions: a deeper question is whether polls actually create, in whole or in part, what they purport to be revealing, i.e. not revealing public opinion, but creating, shaping, molding that public opinion.
It goes on to talk about Walter Lippmann and his role in forming public opinion, and it talks about Walter Bagehot, who remarked: it has been said that if you can only get a middle-class Englishman to think whether there are snails in Sirius, he will soon have an opinion on it. And that goes to a point that I’ve made on this podcast in the past, which is that part of the point is just to get you thinking about certain things. That can be half of the propaganda battle. If you’re thinking about X, well, then suddenly X is something that’s in your life and is important and that you have an opinion on and want to talk about and want to share with other people, and you want the government to take a position on X, etc., etc.
Suddenly you’re concentrating on X, where maybe the real action is happening on Y or Z or A or B or any of the other letters of this imaginary alphabet. Anyway, that’s an important point. Some other blatantly obvious points, but at least they’re making them. The way you ask the question, moreover, can profoundly influence the outcome. As I say, this article is maybe too little too late, but there are some insights in there, so I will at least put it in the show notes so that you can go and take a look at it if you are so inclined.
But let’s look back at that Axios article that we’re looking at today. New findings by Aru for Heartland Forward show that a majority of people trust their own doctors and nurses. Credibility rests on clinical expertise delivered through state-level channels. Of course it does. And everyone knows this. And we know because of this opinion poll, I guess, that was conducted by Aru, whoever that is. But, you know, the weirdest thing happened. Axios had to issue a sort of correction to this article. So a couple of days later, lo and behold, they had this version of that very same sentence.
Zoom in. New findings by Aru, an AI simulation research firm, for Heartland Forward show that a majority of people trust their doctors and nurses. Excuse me, what? An AI simulation research firm? What is that exactly? And what does that mean about whatever polling was supposedly going on here? AI startup Aru is a simulation company that believes it has cracked the code for predicting human behavior with higher accuracy and speed. It was founded just about two years ago by founders who were teenagers at the time. The startup is now valued at a billion dollars, and it has clients that include Ernst & Young, Accenture, the drugmaker Bayer, the film studio A24, Boston Beer Company, Spindrift, and others in business and politics.
Joining us right now is Cameron Fink. He is the CEO of Aru. Ned Ko is the president, and John Kessler is the chief technology officer. And gentlemen, welcome to all of you. John, I think you’re still a teenager, aren’t you? I still am, 17 years old. Thank you so much for having us. Were you guys 18 or what? 19? What are we at here? 19 and 18 when we started the company, 21 and 20 now. Wow. This is a joke, right? That’s got to be from the Onion News Network or something. That can’t be. This is real life.
These are teenagers running a billion dollar company that’s simulating humans for something. What is going on? What timeline are we living on? Oh, that’s right. Well, if you are interested, I would suggest you go and check out the rest of that interview on CNBC. Why not? In which you can find out more about this company and how it came together. These teenagers just all kind of meeting up on LinkedIn or something like that. And starting this billion dollar company that’s now simulating people in order to predict the future. And they have this bizarre discussion about GLP1 use related to alcohol usage and whether they are predicting alcohol usage to go up or down in the future.
A bizarre little segment in so many different ways, not just visually in terms of these 17 year olds who are just a few years older than my son heading this billion dollar company. Sure. Okay. Well, what is Aru? Well, let’s find out more. Let’s go to Aru.com. A-A-R-U.com where you can find out that this is apparently about rendering human granularity. Simulate any decision. Aru simulates entire populations to predict the world’s events. Welcome to the new age of decision dominance, guys. This is going to be great. And if you continue scrolling through this page, you’ll get their corporate blurb and spiel about money can’t reach the unreachable.
Humans are unreliable narrators of their own behavior. You’re not going to listen to people, are you? No, memory fails and incentives distort answers. Social pressure warps what people say away from what they actually do. So: interrogate every assumption. See around every corner. Built for the operator. Research fails to bridge the gap, blah, blah, blah. Leveraging our frontier research, Aru can recreate any population by dynamically generating infinite agents complete with nuanced, realistic traits. Source: trust me, bro. Humanity at scale, they call it.
See the future, change the present. Curiosity has never been the barrier. The constraint was how much we could explore and knowing where to begin. The world moves too fast to wait for human inputs. So anyway, what is this all about? Well, okay, let’s get through the corporate gobbledygook and get to just an explanation of what this is. And for that, we’ll turn to this recent article from futurism.com. Foolish pollsters are now just asking AI what voters would say in response to questions and publishing it at face value. What could go right? And they pick up on that Axios article that we’re highlighting here today.
In which Axios had to make that little specification: Aru is, what, an AI simulation research firm? In other words, Axios had failed to disclose that it was citing alleged polling data that wasn’t drawn from human respondents at all. Instead, it was dreamed up by a large language model. Hey, Grok, how many people trust their doctors? Most of them. What’s the source on that? Trust me, bro. Okay, awesome. The practice that tricked Axios is called silicon sampling, and it’s a recipe for disaster. So: magic words, which you’ll know from the Solutions Watch series are always important for understanding and being able to find and drill down on a phenomenon.
You need to know the words that they are using to refer to it so you can unlock the search for that. Silicon sampling is the phrase that pays in this particular case. So remember that, guys. The idea behind silicon sampling is simple and tantalizing, they write, because large language models can generate responses that emulate human answers. Polling companies see an opportunity to use AI agents to simulate survey responses at a small fraction of the cost and time required for traditional polling. If that sounds like vast overreach that could undermine the value of opinion polling itself, you may be correct.
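To make the idea concrete, here is a minimal sketch of what a silicon-sampling pipeline amounts to. This is an illustration only: the persona fields, the prompt shape, and the llm_answer() stub are my own placeholders standing in for a real LLM API call, not Aru’s actual system.

```python
import random

random.seed(0)  # deterministic for the sketch

# Hypothetical persona pool; a real pipeline would draw these from
# census-style demographic data. All fields here are placeholders.
PERSONAS = [
    {"age": 34, "state": "AR", "occupation": "nurse"},
    {"age": 61, "state": "OH", "occupation": "farmer"},
    {"age": 22, "state": "CA", "occupation": "student"},
]

def llm_answer(persona: dict, question: str) -> str:
    """Stand-in for a real LLM API call. In practice the pollster would
    prompt a model with something like
    'You are a {age}-year-old {occupation} from {state}. {question}'
    and parse the reply. Here we return a random canned answer so the
    sketch runs without any external service."""
    return random.choice(["agree", "disagree"])

def silicon_sample(question: str, n: int) -> dict:
    """Tally answers from n synthetic 'respondents' -- no humans involved."""
    tally = {"agree": 0, "disagree": 0}
    for _ in range(n):
        persona = random.choice(PERSONAS)
        tally[llm_answer(persona, question)] += 1
    return tally

results = silicon_sample("Do you trust your own doctor?", 500)
print(results)
```

Note that every number that comes out of silicon_sample() depends entirely on the model (here, literally a coin flip) and on the persona pool the operator chose, which is exactly the problem: change the prompt or the personas and you change the “poll.”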
Ding, ding, ding. And the rest of this article goes into just basically detailing that and the things that could go wrong and you can read about the research papers that have already been published on this topic and the ways that these eggheads in academia have decided that this might not generate accurate results. Well, you don’t really need peer reviewed published studies in order to understand that there may be something wrong with this concept of silicon sampling. But in case you do, well, here, here they are. So you can read about that in this article. If you’re interested, the link will be in the show notes.
But I think by this point, if your Spidey sense is not going off, you do not have a Spidey sense at all. I mean, this isn’t even Spidey sense territory. This is just a big flashing alarm sign. If you fall for this, it is only because whoever is putting this poll out in front of you is not properly disclosing what it is. If they clearly stated, well, we asked AI how many people trust their doctors and it said most people, then people could take that information at face value for what it is, which is absolutely nothing.
But no, they just say Aru has found through a new study that most people trust their doctors, and intuitively you’ll just believe, oh, that must mean there was some poll conducted in the old-fashioned way. How do people think polling works these days, anyway? Do you think that people are still calling people up on their landline and going, hey, what do you think about this? How many times does that ever happen to you? And yet you read about all these polls. I wonder how they’re accomplished. Well, in this day and age, they don’t even need to involve humans in any way, shape or form.
They get AI to spew out whatever garbage they want. Or actually, they even just say that AI told them this thing. How do we even know that they’re even consulting a large language model? Anyway, trust us, bros. The AI told us that humans prefer this thing to that thing. Humans don’t like this. Humans trust this. And therefore, so should you. The manipulation here is so obvious that if I have to explain it to you, then I don’t even know where to begin. But hopefully I have unlocked a magical new door for you to explore this thing that you might not have known existed just a few minutes ago.
It is called silicon sampling, and it is a new trick to make you believe that everybody thinks X, Y or Z, because here’s this research firm with a new finding that the majority of people believe X. Wait, what is this silicon sampling? That’s right. It is AI-generated slop, and they are asking you to ingest it wholesale, and they might make a little correction to their article when they are called out on it, like Axios was. But still, most people won’t understand what silicon sampling is or what it means. Once you do, you can use that information to do what you should always do whenever presented with any argument in the form of an opinion poll.
The first question you might have is, so what do I care? What do I care if 99% of people on the planet think the sky is fluorescent orange when I see it’s blue? Oh, whoa, whoa, it’s pink. No, it’s blue. I see blue. It doesn’t matter how many people say otherwise, and you should always question the implied assumption that you should change your view of the world based on opinion polling. But of course, they’ll never actually explicitly state that assumption, because that would reveal the game too much. Now, on the question of Aru in particular: I don’t know, but clearly something is going on with this billion-dollar company that arises out of nowhere from the heads of these teenagers, who are then going on CNBC to talk about the future.
There’s clearly something going on here with regards to this company in particular. So if there are any actual budding researchers in the crowd who want to drill down on Aru and where it comes from and who’s behind it, please do so, because there may be more to this story. Just as one cookie crumb down that trail that you might want to explore, here’s a TechCrunch article, “Sources: AI synthetic research startup Aru raised a Series A at a $1 billion headline valuation,” which does note that some of the investors in that Series A funding round come from General Catalyst, Accenture Ventures, and Z Fellows, as well as Aasterisk.
I don’t know how to even say that. Abstract Ventures as well. Okay, there are some things to follow up on. And once you do, you’ll start to find out that General Catalyst and Z Fellows and some of these other investors clearly have ties to the Thielverse and to the usual sources of Silicon Valley capital. So we can tie the bow on that. And maybe there’s more to be found with regards to the history of Aru in particular, but I have a feeling that silicon sampling will not start and end with this one company. I think this is going to be used more and more to try to convince you of, well, whatever they want to convince you of: this is the new star political candidate that everyone’s rallying around.
Or, hey, everyone’s doing this new thing, so you should too. Hey, have you tried GLP-1? You might want to. Whatever it is that they want you to do, they will no doubt try to foist it on you with some silicon sampling. So keep that in mind. This is just a handy dandy public service announcement from The Corbett Report by way of the sometimes occasionally revivified Propaganda Watch podcast. If you’re interested in more of my work, please go to corbettreport.com and explore the archives. You’ll find lots of information like this that hopefully will help you make better sense of the world.
And on that note, this is James Corbett of Libertas.Earth, corbettreport.com, opensourceeducation.online. Looking forward to talking to you again in the very near future. Thanks for watching.
See more of The Corbett Report on their Public Channel and the MPN The Corbett Report channel.