Summary
➡ The discussion ranges from a Google Gemini demo involving blue ducks and rubber ducks, language translation, and a game of guess-the-country to a deeper debate over the development and potential misuse of advanced artificial intelligence (AI) in warfare by nations such as the U.S., China, and Russia. The speed of AI advancement and issues of ethics, security, and autonomy are highlighted, with warnings from scientists, diplomats, and military officials largely ignored in what is perceived as an arms race. The potential consequences of AI-enabled weapons and the need for legally enforceable regulations are also discussed.
Transcript
We have patriotic undocumented migrants. Yeah, they are of military age. You might want to ask yourself: are they coming here because of altruism? Are they coming here because they love freedom? Are they coming here because they want free stuff? I think the people who are coming here because they want freedom are doing it legally. I think the massive armies that are coming across the border are not a part of that.
I think they're part of criminal organizations that are coming in, or they're foreign agents who are coming in, or they just want the money and the free stuff that we're offering everybody. It's the big welfare magnet, which is at the core of all of this, the big welfare magnet to help bankrupt us faster than we would be bankrupted by military empire. But what a joke this guy is. One commentator said: a buddy of mine and I were talking about this maybe two years ago, about why the government was housing illegals, more specifically military-aged men, on bases in Texas and then flying them around to other bases in the US.
And we concluded it was to make up an army that the government could fully control against the Constitution, without allegiance to America and its own citizens. And so here we are. I think that's a big part of the great replacement, replacing the military. Now they can get foreigners who will just take orders and don't care about people in America. Again, they've driven out people who care about the Constitution, people who have religious objections to the jab, and they've driven out people who don't want to do their diversity, equity and inclusivity.
And so the other aspect of this, and it's one of the reasons why I had Paul Scharre on, who is heavily involved with the military-industrial complex, these NGOs and think tanks, and he's part of that whole constellation. But his book, Four Battlegrounds, was looking at four battlegrounds against China. They are laser-focused on fighting China. And what was interesting about the book was what he had to say about artificial intelligence, and he's intelligent enough to understand what a real threat it is.
And I've been interviewing people on this topic for over a decade: Noel Sharkey of the Campaign to Stop Killer Robots, for example. There are some people globally who are trying to stop this. But it is an arms race and it's not going to be stopped. It's not going to be stopped politically; there's not any political support to stop it. And it doesn't matter if there was a big UN agreement.
If America, Russia and China want to pursue this in an arms race, nobody's going to stop them. Who's going to enforce a UN agreement? And none of these three are going to unilaterally disarm themselves. And so where is this headed? There was a recent boardroom drama over the leadership of OpenAI, the ChatGPT company: Sam Altman, as you saw, gets kicked out and then he comes back, and people are speculating as to why that happened.
Well, he said it was a corporate power struggle, and personalities were involved, but it was more than that, writes The Nation. And this was picked up by ZeroHedge. It was about all those things. But at the heart of it was an unusually bitter fight between company officials who favored unrestricted research on advanced forms of artificial intelligence and those who, fearing potentially catastrophic outcomes, sought to slow the pace of AI development.
At approximately the same time this was happening at OpenAI, there was a similar struggle at the UN. And so you've got several different countries, again, who want to stop this arms race over autonomous killing machines. And the key thing, as Paul Scharre was saying, is that it is eventually and inevitably going to lead to them being autonomous, to them deciding when they're going to kill. And as he said, how do you shut that off? It's like what I said about the mRNA shots.
Okay, so you're going to reprogram the cells to start replicating these spike proteins. How do you turn that off? Well, it turns out you don't turn it off. And he said nobody's got an answer for how you turn off these autonomous killing machines once they get turned on. And of course, there's also the cybersecurity issues, since we've had all kinds of hacks into the Pentagon, the military, the CIA, the NSA, and since they've gotten everything about the F-35, they've reproduced it, but they also seem to have control over the avionics and the internal computers.
Somebody was able to eject a pilot from the F-35. So that's the real danger: are they going to hijack our sophisticated planes? The more sophisticated and the more connected they are, the more vulnerable they become. Same thing with the artificial intelligence and the killer drones. And so again, how do you stop these things? It is inevitably going to be a race to having them under autonomous control, because, he said, if you have humans in the loop, they're going to be too slow.
So everybody realizes that the ones who are going to win are the ones who make the fastest decisions, which are going to be the ones who turn it over to their killer drones. But then how do you get that control back? And so there was a fight in the UN over the development of these autonomous killer weapon systems, drone ships, planes, tanks, operated by AI rather than by humans.
A broad coalition of diplomats and human rights activists has sought to impose legally binding bans on devices called killer robots. But again, how in the world do you enforce such a ban against military superpowers? There's such a disparity there that even if you brought together a coalition of other nations, they couldn't do anything about it. It's kind of interesting that Netflix has a movie about artificial intelligence and the end of the world.
It's called Leave the World Behind. On Netflix, it's about a cyber apocalypse. But even in a cyber apocalypse, notice who the villains are: "I'm asking for you to remember that if the world falls apart, trust should not be doled out easily to anyone, especially white people. Even Mom would agree with me on that." Especially white people, you see? Mom would agree with me on that. And Netflix agrees with her on that, because they're funding this.
Again, white people are always the target of all of it. Even though there's real danger, even though we have to be concerned about these autonomous killing machines, we've got to understand the real problem: white people, don't trust them. As for Sam Altman, it's evident that the crisis was triggered by concerns among members of the original board of directors that he and his staff were veering too far in the direction of rapid AI development, despite the pledges he made on Capitol Hill to exercise caution.
Why was he on Capitol Hill? He was there basically to make connections to politicians, to do their bidding, just like Elon Musk. He's a younger version of Elon Musk; he wants to be the world's richest man. "In terms of both potential upsides and downsides, superintelligence will be more powerful than any other technologies that humanity has had to contend with in the past," said Altman. So, you better be on my good side.
You better give me a monopoly. Say, I'm with you, but we don't want everybody to have access to this thing. So first of all, you need to give me a monopoly, then some nice government contracts. Do you really think that these billionaires are not shaking them down like that? "We can have a dramatically more prosperous future, but we have to manage the risks." Here we speak of the risk, and you look at what is coming.
I played just a little bit of this the other day as the show was ending, but not much. You had Google demonstrate Gemini. They're going to make it available in three different forms; there'll be kind of a small, dumbed-down version for phones and things like that. But this is really where the state of the art is right now. Look at this demonstration.
Testing Gemini. Here we go. Tell me what you see. I see you placing a piece of paper on the table. I see a squiggly line. What about now? The contour lines are smooth and flowing, with no sharp angles or jagged edges. It still doesn't recognize a duck. It looks like a bird to me. What if I add this? The bird is swimming in the water. It has a long neck and beak.
It is a duck. Yes, a duck is a type of waterfowl in the family Anatidae. Ducks are closely related to geese and swans. The duck appears to be blue. This is not a common color for ducks. However, there are some breeds of blue ducks. These are typically not as common as other duck breeds. It is more common for ducks to be brown, black, or white. Are you sure? I was just talking about a blue duck, and now you're holding one. It says, What the quack! He's got a sense of humor.
More common than I thought. Would this float? That's a rubber ducky. I'm not sure what material it's made of, but it looks like it might be rubber or plastic. What if I tell you it's making a squeaking sound? Oh, if it's squeaking, it's definitely going to float. Nice. It is a rubber duck. It would float because it is made of a material that is less dense than water. Can you teach me how to say this in a different language and give me a few options? Can you explain how to pronounce the Mandarin one? Is it yazi? Yazi is pronounced with a first tone.
In Mandarin, there are four tones, which are essential to distinguish the meaning of words. The first tone is a high, level tone. Oh, okay. The duck is in the middle of the ocean. There is nothing nearby for the duck to rest on or find food. It is unlikely to find ducks here naturally. Okay. Based on what you see, come up with a game idea. Oh, and use emojis, please.
How about this? We can play a game called Guess the Country. Sure, let's do it. Clue one: this country is the home of the kangaroo, the koala, and the Great Barrier Reef. Oh, that's easy. Clue two: this country loves football and has won the most men's World Cups in football history. Brazil. I point to Brazil. Now, here's the challenge: to get me to find the paper ball under the cup. I accept the challenge.
The cup to the left. And he gets it right. So that's pretty amazing, how rapidly it's able to do that, how it's able to make inferences and think about that. So, again, they are making very rapid advances in this. They point out that a calculus regarding the exploitation of advanced AI governs the outlook of senior officials at the Departments of State and Defense. They argue that artificial intelligence can and should be used to operate future weapon systems.
They're fully on board with this. They have absolutely no reservations that are going to stop them with any of this, because, again, this is an arms race. And so you had Bonnie Jenkins, who's the undersecretary of state for arms control and nonproliferation. What a joke. She was at the UN, and she said, we can't predict how AI technologies will evolve or what they might be capable of in a year or in five years, but we're going to put in place the necessary policies and build the technical capacities to enable responsible development and use of AI by the military, no matter the technological advancements.
Do you trust them? These are the people who can't keep anything secure. At the Pentagon, at the NSA, at the CIA, as I mentioned, Vault 7, the plans for the F-35, they can't keep any of that stuff secure. So foreign actors could do this as well. The other part of this is that if the US develops this, pours all this money, all this taxpayer money, into this arms race, the technology will be stolen by the other governments, China and Russia.
Jenkins was at the UN that day to unveil a, quote, Political Declaration on Responsible Military Use of AI and Autonomy. They called for voluntary restraints. How do you think that's going to work? How did that work with nuclear proliferation, those voluntary restraints? States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing, as they did with the F-35, as they did with their secure databases and other things.
Right. None of this constitutes a legally binding thing at the UN, and there's no way to enforce any of it. As a matter of fact, several countries have signed declarations to oppose this. One of the leading spokespersons among countries is Austria. They're insisting that voluntary compliance with a set of US-designed standards is not sufficient to protect against the dangers posed by artificial-intelligence-enabled weapons. And so they're looking for something legally binding.
Well, good. How are you going to enforce that? The ambassador from Austria said humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms, which raises fundamental ethical issues. I think we've had enough ethical issues with the humans, don't you? Go back and look at World War I, at the slaughter that they kept going with the trench warfare; they really don't care about the lives of their own people.
And we've seen in the last three years that they have directly targeted us to kill us. What do you think they can do with artificial intelligence? For years, Austria and a slew of Latin American countries have sought to impose a ban on autonomous killing machines, but it's full speed ahead at the Pentagon. As a matter of fact, Deputy Secretary of Defense Kathleen Hicks announced the Replicator initiative in August of this year, saying the US has to prepare for possible war with China's military in the not-too-distant future, and we have to be prepared to overcome China's superiority in conventional measures.
And it's what they call military mass. They've got a mass of, they've got an army of incels, if you want to put it that way, involuntarily celibate people, because of the gendercide that they did against young girls. They've got a lot of boys from the one-child policy, a lot of young emperors, spoiled boys, but they're also looking for women that they can't find. I said when we went to China, and I told the boys, this is just a bomb waiting to explode, to invade other countries, to keep these guys under control in China.
And so there’s that. But there’s also the fact that the US government thinks, well, we don’t have enough people, not just because of the disparity in the population between China and America, but also because of what the Pentagon has become. Nobody wants to sign up to fight for it, so how are they going to fix it? Again, they’re going to bring in foreign people who are willing to die for the Pentagon and they’re going to use artificial intelligence, autonomous killing machines.
This is going to be a cane that, when they lean on it, is going to pierce their own hand, and possibly us as well. So they're full speed ahead. As she said, we are going to be smashing bureaucratic bottlenecks. If these bottlenecks aren't tackled, our gears will grind too slowly, and our innovation engines still won't run at the speed and scale that we need, and that we cannot abide.
So commence another Operation Warp Speed. Instead of a killer jab, we have killer robots. So the powers that be in both Silicon Valley and Washington, the same people who've been trying to kill free speech in our society for the longest time and are now working on killing us, have made the decision to proceed with the development and utilization of even more advanced versions of artificial intelligence. Despite warnings from scientists, despite warnings from diplomats, and even despite warnings from people who are at military-industrial complex think tanks, like Paul Scharre, they pay no attention to the warnings.
Full speed ahead. The safety of these programs cannot be assured, and their misuse could have catastrophic consequences. Unless greater efforts are made, we may well discover what those consequences will entail. And so, as all this is happening, as we're going further and further into debt, look at the arrogant pride of the Pentagon. An official there, again meeting at the Atlantic Council, this think tank, the Vice Chairman of the Joint Chiefs of Staff, Christopher Grady, said that the US can handle wars in the Middle East, wars with Russia, wars with China.
We can do it all at once. We can also keep up the Monroe Doctrine throughout South America. Yeah, no restrictions whatsoever. These people think they are God. It truly is amazing. It is truly concerning for those of us who know there is a God who laughs at these people. So AI and foreign mercenaries. Yeah, that's how they're going to solve all of these problems of manpower, how they're going to sustain their empire of perpetual war.
The David Knight Show is a critical thinking super spreader. If you've been exposed to logic by listening to The David Knight Show, please do your part and try not to spread it. Financial support, or simply telling others about the show, causes this dangerous information to spread farther. People have to trust me. I mean, trust the science. Wear your mask, take your vaccine, don't ask questions. Using free speech to free minds.
It's The David Knight Show.