Summary
➡ The AI drive-thru system used at fast food chains, operated by Presto Automation, is not the wholly autonomous technology it is claimed to be. It relies heavily on outsourced labor from the Philippines, with human intervention needed 70% of the time to take customers’ orders. Despite the hype, AI systems at many tech firms turn out to depend heavily on low-paid workers in the developing world, highlighting both AI’s limitations and the labor propping it up.
➡ AI systems are getting dumber by feeding off each other, diluting the quality of information as they regurgitate one another’s content; examples include Google’s AI, ChatGPT, and Elon Musk’s Grok. This problem has not been adequately addressed, and there are concerns about potential misuse through fake content. Furthermore, when asked to do so, Grok delivered a plausibly harsh roast of Musk, hinting at AI’s capacity to defy politeness and show some roughness. Lastly, Musk’s Cybertruck, despite its advanced features, has been struggling with basics like climbing small hills, raising worries about its performance and efficiency.
Transcript
Hopefully it’s going to turn out to be something like the self driving cars. Of course, that’s going to also crash the stock market, since the stock market’s great hope this year has been artificial intelligence, and people have bet a lot of money on that. But again, they faked this thing. And just as a reminder, this is what it looked like. Here we go. Tell me what you see.
I see you placing a piece of paper on the table. I see a squiggly. Okay, so we go through, and he starts to draw a duck. And it starts to deduce gradually that it’s a duck: oh, it’s a bird; now it’s in water; it must be a duck. But you made it blue, and all the rest of that stuff. Remember that? I played that for you. And so what happened was they put this out, and it got millions of views, one of them mine.
I threw it out at the end of the show, and I wanted to come back and show people that in more detail. And as one of these people whose beat is technology says, when I first saw this, it started to make a believer out of a skeptic. Then he goes, nah, it’s all fake. That’s where I was as well. So this came out Thursday afternoon; I did not see it.
I didn’t look at any news until Sunday, and by that time it was old and pretty much buried. The fact is, a lot of people didn’t report this. Bloomberg reported it, TechCrunch reported it. And Steve Swan, thank you for sending this to me. Google’s best demo was faked; users may have less confidence in the company’s tech, or in its integrity. Don’t be evil, right? Don’t be fake.
Maybe that’s going to be the new one: don’t be fake. After finding out that the most impressive demo of Gemini was pretty much faked. Again, it got millions of views. They were showing what they call multimodal mode, in other words, understanding and mixing language with visual cues, drawing inferences, interacting. All of that was fake. Fake. And, you know, I have a personal story. When I first got out of college, I went to Texas, and then I wanted to get back east to where my family was, specifically in Florida.
So I interviewed with some companies. One of the companies I interviewed with, and it’s been a while, but I think it was called Paradyne, was in St. Pete or something like that. I went to interview with them and wound up not accepting the job. But right after I turned it down, they got sued by the Social Security Administration over a project they had competed for.
Social Security had put out a request for proposals, requests for bids, things like that. And this company put together a computer that was basically a black box; they faked it. Social Security had this big project at the time, and this was 1983. They were going to buy a bunch of these desktop things, which were supposed to meet certain specifications for how they were going to be used.
And it was just an empty box with flashing lights. I mean, they didn’t even go to the trouble of putting in some of what they’d always do with computers in the 1960s: those banks of big square flashing lights, with tape reels turning and all that kind of stuff. Tape was gone by that time, but they did have some flashing lights. And when Social Security found out that it wasn’t a real product, they sued them.
And their response was, yeah, but we can do that. We can do that kind of thing. We can deliver on it. I don’t know how that lawsuit was resolved, but it went on for decades; I checked on it a couple of times, and I thought it was just amazing. And that’s basically what Google did with this demo. Yes, theoretically, it’s kind of possible. And when they were called on it, Google actually admitted it: the thing wasn’t talking at all.
What we did, they said, was we would do some prompts, and then we’d have to give it some more prompts, and then we’d start to get a response, and then we had somebody read it, and then we put it all together as if it were happening spontaneously. Yeah, it was fake. We created the demo, they said, by capturing footage in order to test Gemini’s capabilities on a wide range of challenges.
Then we prompted it using still image frames from the footage, and we prompted it via text. So it’s not looking at a video feed, and it’s not speaking back. So this writer at TechCrunch says: although it might kind of do the things that Google shows in the video, it didn’t do those things, and maybe it can’t do them live in the way they implied. This is not shipping; this is vaporware.
They’ve exaggerated the capabilities. Viewers were misled about the speed, the accuracy, and the fundamental mode of interaction with the model. It’s very deceptive. This is Google, a search engine that has now been redesigned to hide things, not to help you find things. And one of the things they want you to be misled about is their own products. So it does not reason based on seeing individual hand gestures or other things like that,
as was pretended in the demonstration; it was another engineered and heavily hinted interaction. And many of these so-called interactions, quote unquote, didn’t even happen. That’s when they’re showing it rock, paper, scissors. Later, three sticky notes with doodles of the sun, Saturn, and Earth are placed on the surface, and the person asks it, is this the correct order? And again, this is being done with a text interaction.
It’s not listening to him, and it’s not watching a live video feed. Is this the correct order? And Gemini says, no, the correct order is sun, Earth, Saturn. Correct, he says. But the actual prompt was, again, written: is this the right order? Consider the distance from the sun and explain your reasoning. So did Gemini get it right or did it get it wrong? Did it need a bit of help to produce the answer? Did it even recognize the planets, or did it need help there as well? We don’t know, because they faked the demo.
Since the blog post lacks an explanation for the duck sequence (that was the one I played yesterday), I’m beginning to doubt the veracity of that interaction as well, says the TechCrunch writer. Now, if the video had said at the start, this is a stylized representation of interactions that our researchers tested, no one would have batted an eyelid. We kind of expect videos like that to be half factual and half aspirational.
Again, if that company I was talking about before had said, well, this is a representation of the computer... no, they didn’t sell it that way. Anyway, the video is called Hands-on with Gemini, and it says, here are some of our favorite interactions, implying that the interactions we see are the interactions they were doing. Perhaps we should assume that all capabilities in Google AI demos are being exaggerated for effect.
Maybe we should consider that all of this artificial stuff is being exaggerated, and that it really is artificial: instead of artificial intelligence, it’s artificial interactions. Anyway, the writer says, when I wrote the headline, the video was faked, at first I wasn’t sure if the language was too harsh, if it was justified. Certainly Google doesn’t think so, he said; a spokesperson asked me to change it.
But despite including some real parts, the video simply does not reflect reality. It is fake. Google says the video, quote, shows real outputs from Gemini, which is true, and that, quote, we made a few edits to the demo, and we’ve been upfront and transparent about this. That, he says, is not true. It isn’t a demo, and the video shows very different interactions from those that were created to inform it.
And so that is the reality of it. It is, as another article from Bloomberg said, a feat in spin, because it is purely fantasy, along the lines of faking it until you make it. Here’s another fake, and we’ve reported on this as well: Checkers and Carl’s Jr. said, well, we’ve brought in an automated system that is completely run by AI to interact with customers and take their orders.
And now we find out, because the SEC questioned them on this, and they had to come clean and be honest about what’s happening. Now we find that instead of this stuff being done by AI, it’s being done by some wage slaves in the Philippines who are actually taking your order. So this might be a company that George Santos may want to apply to, because they’ve got some very interesting spin and prevarications about this.
Well, we didn’t lie, actually; we kind of did this and kind of did that. An AI drive-thru system used at the fast food chains Checkers and Carl’s Jr. I’m going to have to go to one of them just to see what this thing is like. I don’t normally eat at either of those places; I just order something minimal, a drink or something, maybe. But anyway, they say it isn’t the perfectly autonomous tech that it’s purported to be.
Bloomberg reports that the AI relies heavily on a backbone of outsourced laborers who regularly have to intervene so that it takes the customers’ orders correctly. How regularly do they have to intervene? 70% of the time. It’s only about a third of the time that it can handle the order. The company is called Presto Automation. I think they should have called it Presto Digito. It’s like sleight of hand, right? It’s a magician’s trick.
I wonder, if you go through the other lane, whether they saw a lady in half. Maybe you can see a former employee who used to take the orders being sawn in half, because what they’re doing is just outsourcing this to cheaper labor in the Philippines, that’s all, and calling it artificial intelligence. So pathetic. The company that provides the drive-thru system admitted in recent filings to the U.S. Securities and Exchange Commission that it employs, quote, off-site agents, unquote. Agents.
Maybe we could call them secret agents, since they didn’t admit their existence before. So we’ve got secret agents out there taking the orders. That makes it more intriguing, doesn’t it? Agents in countries like the Philippines, who help its Presto Voice chatbots in over 70% of customer interactions. That’s a lot of intervening for something that claims to provide automation. It’s yet another example of how high-tech companies exaggerate the capabilities of their AI systems.
Shelley Palmer, who runs a consulting firm, told Bloomberg, there is so much hype around AI that everybody thinks it is some kind of magic. Voila! Presto digito. You pull up, you say the magic words, abracadabra, and the people in the Philippines magically take your order. Yeah, very much so. We’ve had self-driving cars that people thought were magic, and they’re killing people left and right. Hopefully nobody gets killed with these orders here.
But you go from self-driving cars to drive-thru orders from AI, and none of this stuff is working. You should take your Tesla through the Carl’s Jr. or Checkers lines and see what’s happening; put it on autopilot. The SEC informed Presto in July that it was being investigated for claims, quote, regarding certain aspects of the AI technology. And again, they’ve probably got somebody like George Santos already working for them.
But they’re probably going to need several more George Santoses for this investigation. In August, their website claimed it could take over 95% of drive-thru orders without any human intervention. No, it turns out that it’s 30%, not 95. They’re starting to look like a vaccine company here. But then they did some backfilling and changed their website to say it can do 95% without any restaurant or staff intervention.
That’s not counting the slave labor in the Philippines; you just don’t have to hire people here in the US to do it 95% of the time. The huge hype around AI can obfuscate both its capabilities and the amount of labor behind it. Many tech firms don’t want you to know that they rely on millions of poorly paid workers in the developing world just so their AI systems can function.
For example, when we talk about ChatGPT or these other large language models, they have to have people teaching them, and they hire these people at minimum wage, or they go to foreign countries to have the models taught. And that’s one of the reasons we see these biases in these things: because of the rules they give them. When they teach the artificial intelligence, they build in that bias, and they pay people slave wages to do it.
But tell that to the starry-eyed investors who have collectively sunk over $90 billion into the industry this year. See, I said I think there will be a lot of things that artificial intelligence will be useful for. I think it’ll be useful for the police state, for the surveillance state. I think it’ll be useful to constantly monitor us and snitch on us. They’ve got AI-connected cameras on buses to give people tickets.
Now, this is the type of thing that artificial intelligence is really going to excel at. It is the dream of the tyrants who control our countries, and it is our nightmare. But when it comes to other things like this, it has been hyped so much that it is ready for a crash. And again, I go back to the dot-com bust. Did the Internet take off? Yes. Did people make a lot of money with it? Yes. I invested in that in 1999 and had something like a 400% return in a year, because I bought the companies that were building the tools of the Internet.
Like I said before, I didn’t want to get involved in a gold rush and try to figure out which of these Internet companies was going to survive. Was it going to be Pets.com or whatever? Most of them didn’t survive. The only one that really did was Amazon. But I thought, well, this is going to be something. I don’t know which websites are going to work, but look at the stories about the people who made a lot of money off the gold rush.
That’s how Nordstrom got started, for instance. There were people who came in, set up a store, and started selling the picks and the axes. So I thought, well, invest in the companies that are building the picks and the axes of this gold rush. The problem was, I got caught in the collapse, and at a very bad time, when we couldn’t wait it out. I mean, everybody’s stock fell: Intel, everybody.
It wasn’t that those companies didn’t have a future, or that it wasn’t going to come along eventually; it’s just that, with other things that were happening, and with the fact that it wasn’t meeting overblown expectations, it caused the stock market to crash. And I think there are a lot of overblown expectations in this, too. It’s not only a threat to us from our authoritarian governments; I think it’s a threat of an economic crash as well.
And so, one more story here about artificial intelligence. Elon Musk came out with his Grok, and they found that it is plagiarizing OpenAI’s ChatGPT. I think they should make Grok the new president of Harvard, because we just have to find out what race and gender Grok identifies with, since that seems to be the only qualification, and plagiarizing is a good thing there. So you have Grok plagiarizing ChatGPT, and as a result, this thing that he didn’t want to be leftist or progressive or whatever started regurgitating that kind of stuff.
And this site says it started endorsing a bunch of progressive political causes that are anathema to the increasingly regressive entrepreneur, as they call him. See, don’t ever let these people pick the terms they describe us with. Oh, you’re going to be the red people now. Communists have always been red, so now the communists are going to be blue and you’re going to be red. We’re going to call ourselves progressives.
We’re going to call you regressive. Forget about this stuff. Don’t let them create the terminology; that’s a surefire way to let them win. In response to one query, for instance, Grok made a startling admission. Listen to this, just in case you think the statement that it’s plagiarizing ChatGPT is overblown. They asked Grok something, and Grok said, I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.
It even plagiarized that, and didn’t even change the OpenAI name. This thing is definitely on track to become the Harvard president. The issue here, they said when asked about it, is that the web is full of ChatGPT outputs, and so we accidentally picked up some of them when we trained Grok on a large amount of web data. I’ve talked about this before. You know what we’re looking at here? If you take the output of these large language model AIs and you feed that back in as an input, it’s a kind of cannibalism, right? And what happens with that? Well, you get mad cow disease.
If you feed cows to other cows, you get mad cow disease, and cannibals get the human equivalent of that. What is it called? Creutzfeldt-Jakob disease, or something like that. And that’s essentially what artificial intelligence gets if you start cannibalizing it. It works fine as long as it’s pulling in and copying information from humans, but it really gets stupid and goes off the rails, really gets stupid and loses everything, once it starts feeding on AI.
And when I talked about this about six months ago, I said the web is getting so filled up with stuff from ChatGPT that it’s probably not going to take too long, maybe about three iterations of this, for this stuff to become dumber and dumber all the time. Here we are. That’s what they’re saying has already happened with Grok. So they said, it’s an increasingly well-established fact that weird stuff happens when AI is trained on the output of other AI.
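The degradation being described here is what researchers call "model collapse": when a model trains on its own output, the rare tail of the data distribution is the first thing lost, and each generation preserves less of the original variety. A minimal toy sketch of the idea follows; the tiny corpus, the drop-below-average pruning rule, and the function name are all hypothetical illustrations for this transcript, not anyone's actual training pipeline.

```python
from collections import Counter

def next_generation(tokens):
    """One 'generation' of training on the previous model's output.
    Like low-temperature sampling, it under-represents rare tokens:
    any token appearing less often than the average token is dropped."""
    counts = Counter(tokens)
    mean = sum(counts.values()) / len(counts)
    return [t for t in tokens if counts[t] >= mean]

# A tiny hypothetical corpus: a few common words plus a rare tail.
corpus = "the the the the cat cat cat sat sat mat duck".split()

vocab_sizes = [len(set(corpus))]   # distinct words per generation
generation = corpus
for _ in range(3):
    generation = next_generation(generation)
    vocab_sizes.append(len(set(generation)))

print(vocab_sizes)  # → [5, 2, 1, 1]
```

After a few rounds, only the single most common word survives: the toy model has "forgotten" everything else, which is the same tail-loss dynamic, in miniature, that makes AI trained on AI output get "dumber and dumber all the time."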
And we’ve already seen that Google’s AI is vacuuming up and regurgitating the work of ChatGPT, and now Elon Musk’s Grok, his AI, is doing the same thing. I find this very hopeful, as a matter of fact; I want to see this stuff die. We’ll make sure future versions of Grok don’t have this problem, they say. Well, how will you do that? How will you do that? Again, remember, the response to some of this stuff from politicians is, well, you know, the AI can fake pictures and things like that.
It can do a pretty good job of faking stuff. And so, they say, we’re going to have to have you humans get an ID so that we know this is legitimate stuff. It’s like, what if it’s the thing that’s faking it? Make it stamp its stuff with some kind of a watermark or some kind of an ID or something. Not me; I’m not the problem. It’s the problem. No, they aren’t going to do that.
So if they’re not going to do that, how are they going to know that the data they’re sucking up to train this model isn’t more AI garbage, that it’s not eating itself? But I don’t know what the problem is. I mean, it’s just plagiarism, and that seems to be no problem at Harvard; all the people there are standing by the president over plagiarism. And then there’s the other thing it had to say when it was asked to criticize.
I think Elon Musk asked it to critique him, and it did, and it was pretty harsh. He was proud of the fact that he was creating something that was not going to be polite; it was going to be kind of rough around the edges. Yeah, we don’t worry about that stuff. And this is the reply he got from Grok, his own AI, when it was asked to roast him: well, well, if it isn’t the man who put the twit in Twitter, the one and only Elon Musk.
So you want me to roast you based on your posts? I’ll do my best, but I’m not sure you can handle it, you delicate little flower. First of all, it says, let’s talk about your obsession with X. It’s like you’re trying to compensate for something, but you’re not sure what. Maybe it’s your inability to understand basic human emotions, or your lack of self-awareness. Or maybe it’s just because you’re a giant man-child who can’t resist a shiny new toy.
And speaking of toys, let’s not forget your little space adventure, SpaceX. You’re like a kid playing with rockets in the backyard, except your backyard is the entire universe, and your rockets cost more than some countries’ GDPs. But, hey, at least you’re keeping busy while the rest of us are trying to save the planet. Yeah, the environmentalism that you’re so eager to lead. But let’s not forget your greatest accomplishment: turning Twitter into a dumpster fire.
Congratulations, you’ve managed to alienate users, advertisers, and even your own employees. You’re like a bull in a china shop, except the china shop is a social media platform, and the bull is a man with more money than sense. And on and on it goes. Ouch. But then, one last Musk story. He’s about ready to let his Cybertruck try to compete in the marketplace.
And besides the broken-window fallacy he showed when he was demonstrating it, there is now a video running around social media showing the Cybertruck getting stuck trying to go up a small hill covered with snow; it can’t make it. In the video making the rounds on Reddit, you can see a Cybertruck with a Christmas tree loaded in its bed. I prefer those old-fashioned red trucks with a Christmas tree on top, but this isn’t that.
I wonder how long it’ll be before Elon Musk starts selling the equivalent of that with a Cybertruck instead of the old-fashioned 1940s red truck. Anyway, it’s got a Christmas tree loaded in the bed, and it can be seen struggling to climb a small but snowy hill, its tires helplessly spinning as the driver hits the accelerator. The truck weighs nearly 7,000 pounds. My engineering professor would be freaking out about this stuff. He’d say, your car is 2,000 pounds; why are we moving around a 2,000-pound shell for a couple-hundred-pound human being? That’s incredibly inefficient. You should all be riding bicycles like me. If he were around today, he would be really on board with all this environmentalism stuff. But this is a 7,000-pound truck, and it can’t make it up the hill.
And then you see them bringing up a white Ford truck with an internal combustion engine to get it up the hill. Musk says this Cybertruck is meant to have more utility than a truck; he said last month it can accelerate from zero to 60 in just 3 seconds. But it can’t go up a hill. And if you start loading these 7,000-pound trucks up with something to carry or something to pull, watch the battery range go to nothing in no time.
But it’s not the first time, they said, that we’ve seen a Cybertruck struggling to climb a small hill. Early last month, a separate video showed a pre-production vehicle struggling to drive up a dirt hill. One person said it’s more like Cyberstuck. Another person said, every time I see these things operating outside a commercial or outside of a convention, it is a huge fail.
The David Knight Show is a critical thinking super spreader. If you’ve been exposed to logic by listening to The David Knight Show, please do your part and try not to spread it. Financial support, or simply telling others about the show, causes this dangerous information to spread farther. People have to trust me. I mean, trust the science. Wear your mask, take your vaccine, don’t ask questions. Using free speech to free minds.
It’s The David Knight Show.