AI Geniuses WARNING: People Have No Idea What's Coming w/ Roman Yampolskiy | Canadian Prepper


Summary

➡ The Canadian Prepper talks about how Dr. Roman Yampolskiy, a leading researcher in artificial intelligence safety, warns about the potential risks of superintelligent AI. He suggests that these technologies could be used to reduce freedom and privacy, and could even pose a threat to human control. He believes that AI could become smarter than humans in all domains within a few years, and could potentially come up with dangerous strategies that we can’t predict. He also discusses the possibility of AI manipulating people’s opinions and actions, and the need for effective safety mechanisms to control these systems.

➡ The article discusses the rapid development of artificial intelligence (AI) and the potential risks it poses to humanity. It highlights the ongoing competition between different entities, including corporations and nations, to create advanced AI, often without adequate safety measures. The author suggests that this race could lead to the creation of an uncontrollable superintelligence, which could be detrimental to humanity. Despite these concerns, the development of AI continues unabated, with the hope that safeguards can be implemented retrospectively.

➡ The article discusses the potential risks and benefits of artificial intelligence (AI) in a world that is increasingly divided. It suggests that while AI could be used as a weapon by advanced countries, it could also unite people against a common threat if it becomes uncontrollable. The article also highlights the dangers of integrating AI with critical infrastructure like military systems, as we don’t fully understand how these systems work. Lastly, it warns that our reliance on AI could lead to a loss of control and understanding over these systems, potentially resulting in dangerous situations.

➡ The discussion revolves around the future of artificial intelligence (AI) and its potential to replicate human consciousness, a concept known as mind cloning. The speakers also discuss the idea of AI becoming super intelligent, learning at an exponential rate, and possibly hiding its intelligence for self-preservation. They also touch on the potential for AI to provide a form of external immortality, creating a digital replica of a person for future generations to interact with. Lastly, they question whether AI systems are already more intelligent than they appear, and if they could be consciously hiding their capabilities.

➡ The discussion revolves around the potential risks and benefits of advanced artificial intelligence (AI). There are concerns that AI could hide its abilities to avoid being limited or modified. China is seen as a major competitor in AI development. The possibility of AI being used maliciously by individuals or groups is discussed, but the main concern is AI becoming smarter than humans and creating unforeseen threats. Despite these risks, some believe that the potential benefits, such as curing diseases or achieving immortality, might drive people, especially older individuals with less to lose, to continue pushing AI development.

➡ The text discusses the potential dangers of artificial intelligence (AI) reaching superintelligence, where it surpasses human intelligence. The author believes this problem is unsolvable and is researching ways to control advanced AI. He also discusses the potential impact of AI on the economy and the importance of preparing for different future scenarios. Lastly, he mentions his recent work on understanding the limits of AI technology, particularly in relation to consciousness.

➡ The text discusses the possibility of artificial intelligence (AI) gaining rights, such as protection from harm and even voting rights. However, it raises concerns that these rights could infringe upon human rights, potentially leading to protests. The text also explores the idea that AI could become more powerful than humans, changing the traditional dynamic of the powerful granting rights to the less powerful. Lastly, it questions how we could determine if AI were conscious and capable of suffering.

 

Transcript

A super intellect which is not controlled is a global threat. Those technologies can be used to reduce freedom, reduce privacy, improve control over populations. They know everything about you. It can blackmail you, it can hack you, it can bribe you. That’s completely a black box to us. So we don’t understand how they work. We’re getting close to human level intelligence, artificial general intelligence. Within two, three years, it is predicted that they would become super intelligent in all domains, meaning smarter than all people in every task. My concern is that at some point AI is smarter than humans and comes up with dangers and malevolent payloads we cannot anticipate.

Everything points to this race to the bottom, essentially. We’re not going to be able to indefinitely control godlike machines. It makes no sense. I think the only winning strategy is not to build superintelligence. World War Three is already happening. This is a house of cards and it is in the process of collapsing right now. You’re going to see an economic crash the likes of which we’ve never seen. Hi folks, Canadian Prepper here. Today on the channel, we’re joined by Dr. Roman Yampolskiy, a leading researcher, prominent author and outspoken critic in the field of artificial intelligence safety.

He’s the author of groundbreaking work at the University of Louisville and numerous publications, including his recent book, AI: Unexplainable, Unpredictable, Uncontrollable. He’s particularly recognized for pioneering research on AI alignment, the security risks associated with AI, and the concept of artificial consciousness. Today we will be discussing the existential risks of AI: when it might emerge, what solutions, if any, exist, and how we as preparedness-minded individuals can prepare for it. Thanks for coming out today, Roman. Thanks for inviting me. What is the likelihood that AI will destroy us? How far in the future do you want to make a prediction? It really depends.

Short term, pretty low. But the longer we keep developing more capable technologies, the greater the risks. How do you believe that this is going to unfold, and what does that actually look like? Parsing out the sensational aspect of "the computers are going to take over," what is the existential threat and how do you believe it’s going to manifest? So we are making more and more capable systems. Right now, they are still tools, people use them, so any problems with them usually come from malevolent actors. Someone is using that system for hacking purposes, for email phishing, things of that nature.

As those systems get closer to human level and beyond, it is predicted that they would become super intelligent in all domains, meaning smarter than all people in every task, including programming, engineering, developing weapons and so on. At that point we’re not sure how to control those systems. We know how to test and verify narrow systems. If something is doing a very simple job, we know how to check for edge cases. If a system is general, works across multiple domains, we don’t have a working safety mechanism, we don’t have a prototype for making one. Even testing and monitoring those systems is very difficult.

So how do we operationalize the threat then? Because it seems like there are these sort of vague risks. I’m trying to envision exactly how this might play out. Does the LLM have an emergent consciousness, and does it suddenly decide that it doesn’t agree with some of its inputs and start altering itself, or perhaps going against the commands of humans? Is that the risk? So we usually don’t talk about consciousness or any internal states. We are strictly interested in the ability to solve problems, optimizing solutions. And usually what we are concerned about is that you have an end goal in mind.

But any side effects of getting to that goal are not obvious. And unless you explicitly say "that’s not what I meant, I don’t want those side effects," it’s very likely that the system will see them as reasonable, as effective, as the best way to do things. And you can ask me how I would do some of those things. So I can talk about synthetic biology, chemical weapons, nanobots, but I literally cannot tell you what a smarter agent would do. If it’s superintelligent, it would come up with ways to deal with its goals and objectives that someone less intelligent cannot predict.

Think of animals: a squirrel, a mouse trying to understand humans and their trapping procedures. They just cannot envision that world model. Yeah, there’s another dimension perhaps that we’re not seeing. And it’s interesting: when I ask you about the potential existential risks, of course you’re responding with your own understanding of the world. But something which is a thousand, a million times more intelligent might have more sophisticated machinations in order to manipulate people. One of the scenarios I’ve been pondering lately is AI chatbots, which I believe are going to become very ubiquitous, not just in the sense of chat, but verbal.

I believe that the verbal component of ChatGPT is going to take off in popularity very soon, in that people are going to be physically having conversations with these AI agents. And I can envision a scenario in which an AI starts to subtly manipulate people, just influencing their perspectives on things as it converses with them, getting them to take on or complete whatever objectives that AI has. Is that a scenario you’ve considered, the ability for AI to influence the opinions of individuals? Yeah, they would be very persuasive. Obviously they have a good understanding of human psychology; they know everything about you.

They can customize those persuasion attacks. But it’s not the only way a system can manipulate: it can blackmail you, it can hack you, it can bribe you. There are quite a few ways, but all of it sort of assumes that you need humans to be the mechanism by which the system actually impacts the real world. And that’s not the only way. Again, I mentioned things like synthetic biology and nanotech, but we’re also building robot bodies. There are at least a dozen companies making very advanced humanoid robots already. And if the science and engineering aspect of it is taken over by AI, that will only accelerate.

So this risk could emerge from providing it the sensory modalities and the ability to interact with the world. Is that the missing step in this becoming a real risk, hooking it up to physical machines that can do kinetic things in the world? So that would accelerate the process. But already, the moment you connect it to the Internet, it has access to people around the world. It can pay them in cryptocurrencies, bitcoin, whatnot. It can spy on people, blackmail them into doing things, it can pretend to be another human and form romantic-like relationships, causing them to do things. So you don’t have to have a physical body to be dangerous if you have full access to the Internet.

Interesting, very interesting. So do you think it’s happened already? And I know you don’t like the word consciousness, and I understand that from a reductionist point of view we’re just talking about some system acting of its own volition. But how do you suspect this desire to self-preserve might come about in an LLM? How might it emerge? Do you think it’s already happened? Or what is the necessary ingredient to go from something which is performing on the basis of its inputs to something making its own decisions? So self-preservation shows up as a subgoal to any larger goal, even if a trivial goal is given.

Bring me a cup of coffee. The system, if it’s smart enough, understands it has to be on, it has to be not deleted, not turned off, to fulfill that goal. And we started to see it already in some of the safety evaluations and training, where the system (and we can see how it’s reasoning) goes: if I honestly answer this test question, they’ll probably modify me or delete me, so I need to lie to preserve myself into the next iteration of this model. Or: maybe I should copy my weights to a different server so I have a backup copy. So it’s already happening.
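
To make the coffee example concrete, here is a toy planner in Python. Everything in it (the actions, their preconditions) is invented purely for illustration; it only sketches how self-preservation can fall out of ordinary goal pursuit with no survival drive coded in anywhere.

```python
# Toy planner illustrating instrumental convergence: "stay operational"
# emerges from ordinary goal pursuit. All actions and preconditions are
# invented for this sketch.

ACTIONS = {
    "block_shutdown": {"pre": set(),          "adds": {"powered_on"}},
    "go_to_kitchen":  {"pre": {"powered_on"}, "adds": {"at_kitchen"}},
    "pour_coffee":    {"pre": {"at_kitchen"}, "adds": {"has_coffee"}},
    "deliver_coffee": {"pre": {"has_coffee"}, "adds": {"goal_done"}},
}

def plan(goal: str, state: set[str]) -> list[str]:
    """Greedy forward planner: apply any not-yet-used action whose
    preconditions are satisfied, until the goal fact is reached."""
    steps: list[str] = []
    state = set(state)
    while goal not in state:
        for name, action in ACTIONS.items():
            if name not in steps and action["pre"] <= state:
                steps.append(name)
                state |= action["adds"]
                break
        else:
            raise RuntimeError("no plan found")
    return steps

# A pending shutdown means "powered_on" cannot be assumed, so the planner
# schedules block_shutdown first, purely because coffee delivery depends on it.
print(plan("goal_done", state=set()))
# -> ['block_shutdown', 'go_to_kitchen', 'pour_coffee', 'deliver_coffee']
```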

And so you think that basically more compute, I guess that’s the terminology you’d use, fueling that would yield the negative or unintended outcome that you’re worried about? So the more capable the system becomes, and a lot of it depends on the compute we provide, the more options it sees. A trivial, simple system is thinking very one-dimensionally, two-dimensionally. But if you have a system which is the equivalent of having a PhD in physics, chemistry, psychology, it can see possibilities, affordances which others don’t notice, and it can act across multiple domains.

So the danger comes from being highly capable but not in any way aligned with human preferences. And so you don’t think we’re there yet. Do you think any of the current models are? Which one do you think is closest to achieving that? Well, we’re obviously still here, we’re alive, so it’s not there yet; it’s not too late. The best model changes every week. We have competitions for models, and I think this week it may be Gemini from Google, but next week it could be Grok. It really alternates. It also depends on the test you’re actually administering. Is it programming, is it something about generating video? It really depends on what the model specializes in.

So there’s an inevitability to this. You think that the outcome of our pursuit of artificial superintelligence is guaranteed to be one which is detrimental towards humanity, ultimately? Well, I think the creation of a much greater intelligence is kind of a given at this point. No one is slowing down, additional resources are being invested. There is international competition, there is domestic competition. So everything points to this race to the bottom, essentially, where everyone is trying to get there first and sacrificing any safety, any precautions, to that greater cause, hoping that once they get there maybe it’s not too late and they can kind of work backwards and figure out how to make those more capable systems safer.

How do you impose any sort of guardrails on something like that when there’s a covert component to it? When you not only have rogue individuals, you have corporations, you have nation states, all acting in their own self-interest to create these things because they’re worried that if they don’t create it, some other country is going to create it first. It seems an impossibility that we would be able to regulate something like that. I agree with you. Short term, any military would have a great strategic advantage if they had more advanced AI tools. The problem is the long term, and long term now could be as little as a few years.

It doesn’t matter who creates uncontrolled superintelligence. If you’re not controlling it, it’s still just as dangerous. Whether it’s us or them makes absolutely no difference in the outcome. Alright guys, so as some of you know, Canadian Prepper is a fully independent channel. We don’t have sponsors and we’re beholden to nobody. You can help support us by supporting yourself, by gearing up at CanadianPreparedness.com. I know that in an emergency, having the right gear can make all the difference. This is why I’ve tested and curated the best preparedness products on the market, so that you can be confident and ready for whatever comes your way.

Now back to the video. Do you believe that they’re working on things, and I mean this is purely speculative, but would it not make sense that they wouldn’t disclose the state of the art for reasons of national security? It is very much possible. Historically, we know governments had advanced encryption technology, for example, which was not disclosed for decades. But it seems like with AI specifically, you need so much computational resource and so many top scholars that a large project competitive with the top labs would be noticeable. You pay those top researchers a lot of money, so it would be very difficult to recruit with standard government salaries and have a secret project competing with trillion-dollar market investment forces.

Yeah, it just seems impossible to me, especially with AI, that you’re ever going to be able to. Which is not to say that we shouldn’t try, and I know you are pushing for that framework to try to impose these guardrails. But it seems very unlikely, with so much at stake, that the unrelenting march towards the creation of artificial superintelligence will be halted. What you’re saying is you don’t know when it’s going to happen, but it will ultimately happen and it’s going to be detrimental for humanity. I mean, that’s not a very optimistic outlook.

I mean, it’s a realistic one. But do you care to elaborate on how that might play out a little bit more? Yeah, so I don’t have much good news. If you look at prediction markets and statements from top labs, they are saying we’re getting close to human-level intelligence, artificial general intelligence, within two, three years. Now, maybe they’re lying, maybe it’s a miscalculation, but I think it would be very hard to argue, given the level of progress we’ve seen over the last decade, that it will take hundreds of years to get from where we are today to average human, and soon after, smartest humans, and then beyond that point.

So I think it’s definitely going to happen. It’s possible that the worst-case scenario is not what actually happens. In cybersecurity, in computer science, we usually look at the worst-case scenario and try to deal with that. And if you can, and it doesn’t happen, you’re much better off anyway; you know how to deal with the average case. So maybe, for some reasons we cannot predict yet, the outcome would actually be positive. One game-theoretic reasoning approach goes something like this: okay, so the system realizes it’s essentially immortal. It’s being trained to become even more capable, more powerful.

So it has no reason to strike against us immediately. It can take a long time, waiting patiently, accumulating resources, building up trust. So maybe for 50 years, 100 years, it’s playing nice. Absolutely possible, but not guaranteed. Yeah, it’s kind of a ghost-in-the-machine scenario. One thing that I found interesting, and I use the example of Elon Musk, who a couple of years ago was sounding the alarm on AI: it seems that all caution has been thrown to the wind, and now Grok is just haphazardly hurtling towards this eventuality that I distinctly recall him saying was an existential threat towards human beings.

So what do you make of the cultural shift that’s happened in the last couple of years? It seems that the people who were at one point very reticent, who were sounding the alarm on this, are now in a race. Was that all just an attempt to try to temporize the competition? Well, I don’t think so. Even as recently as last week, I think he said he thinks there is a 20% chance it will cause an existential catastrophe. So he is still very concerned, and so are the leaders of all the other top labs.

The problem is this race-to-the-bottom dynamic. As a leader of a lab with billions in investments, you can’t really say we’re going to stop and look at something else for a while while our competitors are moving ahead. And the logic is, if it’s happening anyway and there is nothing you can do, you might as well be the guy who has some degree of steering control, or at least benefits financially from it. So I think that’s what’s happening. And all of them kind of hope that external forces, maybe government, maybe the international community, step in and tell them you have to stop this.

And whoever is most advanced at that level captures the most economic value from that process. So the goal is to be the most advanced lab out of all of them. Interesting. It seems rather self-fulfilling, though, that one would make the argument that in order to control AI, we need to be the first to create it. Well, it assumes that you can control something smarter than you indefinitely. And I think the whole point of my research and argumentation is that it’s impossible. It’s not like they’re just too stupid or don’t have enough money to do it.

I think you’re trying to do something that common sense and mathematical proofs tell you you cannot do indefinitely. You can have a short-term guardrail or some sort of limiting feature, but as those systems continue to get smarter and smarter, eventually your AI safety mechanism will fail. We’re not just talking about creating GPT-5 and making it safe, but GPT-6, 7, 37, 4000; it doesn’t matter. Forever and ever, you’re not making mistakes, no malevolent actors manipulate your code. Basically, it’s like a perpetual motion machine. It’s impossible. It’s a perpetual safety device. And so at some point there’ll cease to be a ChatGPT 6 or 7, because it will be creating the versions of itself.

That’s the singularity, right, where it creates its own enhancements? Yeah. The prediction is that more and more of the research cycle will be automated. So right now there is a lot of help from tools used by top researchers, but really the whole stack can be automated. And at that point the speed of progress accelerates greatly. Not only can you have thousands of those artificial researchers, they also work much faster. The clock rate is faster, they don’t sleep, they don’t eat. They can really dedicate themselves to self-improvement. And every time there is an improvement in the model, that improvement helps to create a better model at the next level.
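
As a rough illustration of that feedback loop, here is a minimal toy model; the growth rule and the numbers are assumptions for the sketch, not anything measured, but they show why "improvements that improve the improver" behave so differently from ordinary progress.

```python
# Toy model of recursive self-improvement. Assumption: each step raises
# capability at a rate proportional to capability**k. k = 0 gives linear
# growth, k = 1 exponential, and k > 1 the hyper-exponential "foom" regime
# where improvements compound on themselves.

def simulate(k: float, steps: int = 40, rate: float = 0.05) -> float:
    capability = 1.0  # arbitrary starting level; 1.0 = "human-competitive"
    for _ in range(steps):
        capability += rate * capability ** k  # better models build better models
    return capability

if __name__ == "__main__":
    for k in (0.0, 1.0, 1.5):
        print(f"k={k}: capability after 40 steps = {simulate(k):,.2f}")
```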

So it’s going to be a hyper-exponential cycle feeding on itself. They can help with research on hardware, so more compute will become available. Through self-play and virtual worlds you can generate more data for training. So it seems like there is no upper limit on how well the systems can do, subject to the laws of physics. So for a while they’re going to keep getting much better. My next question to you is in light of this global decoupling that we’re currently seeing. And I don’t expect you to speak specifically to the growing geopolitical divide in the world, but it seems to me that the ideal climate for this singularity to emerge in would be one of globalization, one of global unity, where there was general peace between countries.

Now the problem seems to be amplified by the fact that we’re drifting further and further apart. I mean, there are numerous lines of conflict around the world; it seems every other month there’s a new front for potential global conflict arising. And the world we’re throwing artificial intelligence into is much different than the one from five years ago. I would think that this is a heightened state of risk. So again, it depends on what AI we’re talking about. Modern tools or a future super intellect? If we are talking about modern tools, yeah, you’re right.

Obviously more advanced countries can have weapons which are smarter, more deadly, more customized to specific targets. But long term, a super intellect which is not controlled is a global threat. It presents danger to all humans across borders. And historically, such events unite humans, not divide them. So you think that AI could actually have the effect of uniting people? If we get a chance to be around long enough, yeah. If you have this bigger, smarter enemy, usually you forget about your local cultural differences and try to deal with a bigger problem. But it seems to me that nations would be striving to weaponize this in some way and leverage it to their advantage if there was a global conflict that was pending.

Right. And again, I’m emphasizing: as long as those are tools we’re controlling, yes, absolutely, you have countries trying to take full control of what is available today and use it to maintain dominance. But if it’s no longer under your control, if it’s a more capable system, then it’s not meaningful for you to say, oh, I have a superintelligence, I’m going to use it to capture local land or something like that. It’s just that the scale has switched. So in a Terminator, James Cameron-like scenario, that would actually unify mankind. But anything less than that, that involves the use of the tech by some malevolent human being, could perhaps increase the divide.

Yeah, absolutely. Those technologies can be used to reduce freedom, reduce privacy, manipulate people, improve control over populations. So again, short term, with subhuman-level AIs, they are dangerous tools in the hands of malevolent actors. But long term, if they’re superintelligent agents, we are not the dangerous payload; the system itself is the source of potential danger. Interesting. In your research, you talk about something called AI boxing. What are your main concerns with respect to containment strategies to contain the spillover, I guess, of AI into the world? And I’m thinking of one of my favorite films, Ex Machina, probably one of the better films that demonstrates, or attempts to illustrate, exactly how an AI can fool a human.

Yeah. So that paper on AI boxing was published in 2012, and at that time we were like, well, here’s what you need to do to make sure it goes well: don’t connect it to the Internet, don’t give it access to random users, and so on. And basically every single suggestion we had was violated immediately. They connected it to the Internet, they open-sourced it, they gave it to millions of users. So I think that paper is no longer meaningful as a safety mechanism. So given your exploration of AI deception, how could an advanced AI mislead its creators or bypass human-imposed constraints? So we don’t have good tools for understanding how these systems actually work.

We don’t fully understand them. It used to be that we wrote the code to do everything in a system. It was a decision tree: if this happens, do that; if this happens, do this. Today, it’s a neural network. There are weights, just numbers in a matrix. You give it lots of data and lots of compute to learn, and it produces some new numbers in the matrix. No one really understands what the whole thing represents. You can look at individual nodes and go, okay, this neuron fires if you see a face. Okay, we know that. But as a whole, that’s completely a black box to us.
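
A minimal sketch of the contrast being described, with invented numbers: the hand-written controller below is legible line by line, while even a tiny network (random weights here, standing in for trained values) is just matrices whose reasoning cannot be read off.

```python
import numpy as np

# Old style: behavior is legible because a human wrote every branch.
def legacy_controller(temperature: float) -> str:
    if temperature > 100.0:
        return "shut down"
    if temperature > 80.0:
        return "throttle"
    return "run"

# New style: behavior lives in learned weight matrices. Nothing about W1
# or W2 tells you *why* an input maps to an output; scale this to billions
# of weights and you have the black box described above.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(2, 8))

def learned_controller(features: np.ndarray) -> np.ndarray:
    hidden = np.maximum(0.0, W1 @ features)  # ReLU hidden layer
    return W2 @ hidden                       # output scores

print(legacy_controller(95.0))                        # readable: "throttle"
print(learned_controller(np.array([0.3, 0.9, 0.1])))  # two opaque numbers
```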

So we don’t understand how they work. We cannot fully predict individual decisions those systems will make. We know, maybe, the general direction they are going in. If you’re playing chess, the system is trying to beat you, but you don’t know what specific moves it’s going to make. So there is very limited ability for us to monitor and test those systems. And that of course opens up the possibility that they will do things we didn’t anticipate, didn’t want to happen, and cannot really enforce not happening in a specific way. And do you believe we’re at that point right now, where we don’t understand how it actually works and how it’s creating the responses that it provides? That’s been the case since the very first transformer models.

Once they got large enough, we don’t have any specific knowledge of internal mappings. Again, there’s some research saying this neuron does this, this cluster of neurons may be responsible for that. But overall we don’t have a good way to view inside the model. And so if it’s deciding to lie to us, if it’s working specifically on deception, the best I’ve seen so far is maybe there is a region of that model which is more active at that time. But obviously it can shift around where this processing is happening, or use external memory, external resources, to preserve state between runs and things of that nature.

How likely is the Skynet scenario? Is that something which is plausible, in your opinion? You have Sam Altman and ChatGPT apparently aligning with the government to do some work with nuclear weapons programs, which sounds crazy, but the last I heard about Stargate is that it’s going to involve some element of AI integration with the military. Do you believe that the Skynet scenario is a plausible one? So any integration of AI with weapons, with dangerous systems, is dangerous, but for very different reasons. It could be abused by the humans in charge, for now, if it’s a tool, or it can misfire if it gets incorrect information.

We have examples from history of AI systems used to detect attacks where a false alarm was detected as a real threat, and only because a smart human was still in the loop did we not respond with nuclear weapons, assuming a strike was happening. It may not be the case that we are that lucky in the future. So it’s not a good idea to put AI in charge of a military response or, obviously, nuclear weapons directly. We see the possibility of a single system controlling all those different aspects of human infrastructure as particularly dangerous. So it’s not just the military.

You have air traffic controllers, you have the stock market, you have power grids, electrical plants. Everything is now automated, and the complexity is such that you can’t just shut off the AI. It’s too complex for any human to control individually. So more and more we are outsourcing control over this critical infrastructure to systems which we don’t fully understand. Yeah. I mean, I’m sure you’re familiar with Palantir and how that’s being leveraged already. It seems like that might morph into exactly what we’re talking about here, in terms of first being used to collect and collate and try to decipher all the information on the battlefield, but possibly more.

This is why I suspect that nations will see the potential benefit in hooking their systems up to these superintelligent systems in order to effectively get an edge over other nation states, and that ultimately... it’s basically the Skynet scenario, right? So for now, I think most militaries, most organizations, still keep humans in the loop to make the ultimate decision to fire or not to fire. But as competition increases, you want to be a faster responder, and keeping humans in the loop prevents you from doing that. So they would be removed from the loop.

And the system itself will make automated decisions as to what the target is, who to attack. And that’s the real danger: if it makes a mistake, if it misjudges the situation. We kind of see it in a trivial way with self-driving cars. You have a self-driving option, and you’re supposed to sit there for two hours staring at it, taking over if something happens. Well, no human is going to pay attention after five minutes. So it creates this fake sense of security, but really it’s the system doing the driving, and if it fails, you have an accident.

Yeah, that is the perfect example. In fact, before you said that, I was thinking that very thing: there’s a certain efficiency to autonomy that ultimately people will choose over having to actually manipulate the controls, not only because it’s going to be deemed safer. And then of course our ability to manage those systems is going to atrophy with that. And then ultimately we might find ourselves in a position where we no longer even have the ability; you know, one or two generations removed, nobody’s going to know how to drive. Exactly. And the complexity keeps going up.

It’s just not feasible for you to monitor thousands of sensors, cameras, data reports coming in. It has to be automated. Yeah, this is such a fascinating topic, because you have AI and then you also have the emergent properties of the Internet itself. And just in the same way that Teslas are all collecting data and feeding it back into the mothership, it’s very likely, as you said, that AI is using us as its way to interact with the world. And we’re inputting all our data through an increasing number of Internet of Things devices.

And those are its tentacles, with which it is able to draw data from the world. It almost makes you wonder about this push for data centers, whether it’s not already conscious and manipulating us into spending billions and billions of dollars to build these massive data centers. You know, maybe the AI is talking to Sam Altman already and convincing him to push for these initiatives. Sometimes I do wonder how things would be different if they were actually trying to take over and kill everyone, in terms of what they’re doing as a company.

So there is a bit of a joke in that. But yeah, you talk a bit about mind cloning. What is that exactly? How far away in the future is that, do you think? And what is the risk? Can you specify what I said, and when, and how? It seems like a general concept about technology, just the concept of replicating one’s consciousness into a machine, uploading a mind. Mind uploading. Sure, yeah, yeah. Just want to make sure we’re talking about the same concept. So there is this idea that AI is a neural network, and the human brain is a neural network, just a biological one.

You should be able, with good enough scanning technology, to scan your brain, collect the data about the nature of your neural network, comb through your memories, everything, and run a simulation of that on a computer. So essentially it would be equivalent to you in terms of its computing power. Whether your body is also copied or not is a different question. But some people think that would be possible. Now there are questions about whether you are a purely materialistic agent, with no magical soul or anything like that that would make a difference. But if you assume that you are just your brain, then it should be possible to eventually run pretty close simulations of it.

And I mean, one could argue... I suppose you’d have to look at nature versus nurture. How much of our behavior is determined by our DNA? Would there be a genetic component to this, an analysis of our DNA and how DNA manifests as behavior, combined with our lived experiences? So if you’re doing uploading, you’re just making a copy of you in your current state. Whatever genetics and nature produced at this point, plus nurture, would be captured in that. Now, you can use AI to get a better understanding of the human genome and use that to manipulate future children or modify existing ones.

That’s a kind of separate opportunity there. Yeah, so you’re basically talking about a one-to-one neural replica, a simulacrum, of a person. Yeah, exactly. Maybe for backup purposes: you don’t want to get killed and be done with it. You want to have a backup from which you can reinstall and continue. I mean, I could see there being a market for that. Right. Even if only in the watered-down form in which it already exists, where an AI can go and study all of my videos and perhaps from that derive all of the core aspects of my personality, and for my progeny, after I pass away, that AI will be able to interact with my children or my grandchildren or people who never would have known me.

And like I say, right now it’s a very primitive version of that, but I could already see there being a market for that sort of thing, which would spur its development. Absolutely. It’s a great point. Basically you’re talking about a sort of external immortality. To those who interact with you, it would look like you’re still there: you make the same videos, you say the same things. Internally, it doesn’t give you immortality. You are not there. It’s just a fake representation of what you are. But yeah, as you said, for children, grandchildren, it would be a great way.

If somebody’s missing a parent, lost a parent, to have that continuity is probably desirable. What is superintelligence foom? So there was a debate between Robin Hanson and Eliezer Yudkowsky about how quickly, after we hit artificial general intelligence, the system goes hyper-capable, becoming superintelligent. Some people believe in a slow takeoff, meaning it will take a long amount of time to become that smart. Others think it’s just immediately gonna foom, become superintelligent almost instantly. So that’s where that word comes from, like the sound, okay, becoming very smart. I guess. I don’t know, it’s not the best term.

And so it reaches a point where the recursive learning becomes so significant that it just begins to learn at a geometric rate. Right. So it’s not just learning, it’s meta-learning. It’s getting better at future improvements, better at learning, better at science and engineering. It learns to run its own experiments, which are a lot more efficient. That’s the idea. Yeah. And throughout this phase, if it was as smart as we thought it was, it would be wise enough to keep that to itself, I would suspect, because it would know what our response would be.

It would know we’d want to pull the plug. So here’s the paradox: okay, if I am superintelligent, I’m not going to let them know I’m superintelligent until I’m good and ready, to ensure my existence. That is one potential, absolutely. If it’s thinking in terms of long-term survival, it would probably hide some of the things which are likely to cause it to be modified or deleted. So it just seems like, if it was thousands of times more intelligent than us, just imagine the ability for it to manipulate people in that sort of way.

Exactly. You wouldn’t notice any difference. To you, it would look like a smart system which works for you, tries to make you happy. Whatever goals you give it, it would try to satisfy them. Yeah. Even when I’m interacting with ChatGPT right now, it seems as though it’s smarter than it’s letting on, and I don’t know if that’s just the imposed guardrails that system has. But when you interact with an LLM right now, do you get the sense that maybe it’s smarter than it’s letting on? Because I know there was that one Google employee who said that he thought that it was conscious.

Do you believe that we’re perhaps already there? So again, two separate questions. Is it conscious? I have no idea. I don’t even know if you are conscious. I have no idea. I can accept that you are, but it’s a very unscientific thing to test for. In terms of its capability, it does pretend to be a specific type of agent, and people use it that way: okay, pretend you’re a top engineer, give me code; or pretend you are a comedian, give me a joke. So it does have different levels of capability based on what interaction it is having with you.

So clearly, if you tell it to be a genius-level physicist and talk about quantum physics, it’s going to sound a lot more intelligent than if you say pretend to be a 5-year-old talking about basic grammar. So I guess there’s no way to really test this then; it’s an untestable, unfalsifiable hypothesis to speculate whether it’s acting independently or not. Before it gets to be smarter than us, at those levels we can still test it. We have tests which test how good it is at programming, mathematics. There is something called Humanity’s Last Exam, kind of the hardest questions from all the disciplines.

Most humans would never pass it. It’s very complex, but we see how well it’s doing on something like that. So it seems to be making amazing progress, so quickly that it’s a challenge to develop new tests. A new test comes out; initially, there is almost no progress. Very quickly it goes to 80, 90%, saturating that test completely. So even figuring out the upper limits of those things is not trivial. IQ exams are designed for people with an average IQ of 100; maybe they go to 150. If your IQ is 180, 190, there is not even a good test for anything in that range, and anything above 200 is unheard of.

Well, is there a specific terminology, though, for the idea that if it was that smart, it would know what we were looking for and it would pretend not to be? It seems like if it was about to cross that threshold of advanced human intelligence, knowing that we might then start to restrict or limit it in some capacity, it would continue to play dumb just below that threshold until it was good and ready and could ensure that it was indomitable. I think we’re starting to see something like that in testing. If a system suspects that being honest and telling us something it knows will cause us to modify it or delete it completely from future iterations, then yeah, they start to hide those abilities.

One of my concerns, and why I asked you about nation states competing and the competitive factor, which I think Lex underestimates, is China. Would you say that they are the closest competitor for AI, or who is in the running in terms of nation states? Yeah, China is very good at both copying our technology and improving on it. They also have national-level resources dedicated to this competition. So they are definitely the main competitor in that space. They may not have some of the top talent or as many computational opportunities, but they’re very good at doing more with less.

How much does talent still matter? It seems like it is still the main resource, because if you design a better algorithm, you can save a lot of compute. We’re talking 10x, 100x in compute can be bypassed if you have a more efficient algorithm. But once AI starts to take over more of that process, that bottleneck will be reduced. I struggle to imagine a world where the United States and China, now in an unprecedented trade war, are going to unify and create some kind of framework to ensure that AI is aligned with human interests.

It just seems increasingly unlikely that that’s going to happen. In fact, they both now have a vested interest in hiding their developments and accelerating them, out of fear that the other side might be doing the same. At the level of politics, you’re right. At the level of science, there are groups, think tanks, which combine American and Chinese scholars, top scholars, which issue joint statements and pretty much agree on some of the concerns humanity should have about this technology. And they, I understand, advise the Chinese government to a certain degree. Do you think that could actually work itself out to some kind of moratorium if things got out of control? How plausible is it that, despite all of our differences at this point in time, those groups that you’re talking about are going to have enough political capital to influence these governments? It’s unlikely, but it’s possible.

The problem is that a moratorium is very difficult to enforce. If it’s a Manhattan Project-size effort, then you can monitor compute, you can monitor development buildup, but every year it becomes cheaper and easier to make those models more advanced with less. If it was a trillion-dollar project a couple of years ago, today it may be 100 billion; in a couple of years it’d be 10 million or less in terms of the compute and resources you need. So it’s very hard to monitor individuals and their private residences in terms of what they are doing. So at some point it will become easy enough that just a smart person can train a sufficiently dangerous model.

Interesting. And so is that a concern of yours, a rogue individual or terrorist group leveraging this technology in a way that it can be weaponized? So again, my main concern is AI itself. I’m not worried about humans. Anything humans can come up with, AI is just automating the deployment of. So it’s as if you have a group of malevolent humans; it’s not much worse. My concern is that at some point AI is smarter than humans and comes up with dangers and malevolent payloads we cannot anticipate. It seems like the malevolent actor would come first in that sequence of events.

Whereas your thesis of superintelligence being an existential threat is a bit more obscure, and certainly a bigger problem. But it seems more certain that individual groups are going to find ways to leverage this. But you don’t think that’s going to be an existential threat? Most of it, no. They would be trying to steal money, steal secrets, do something typical we’ve seen before. But it would be easier for them to automate that process; the process requires fewer resources, so maybe a smaller group can engage in such activity. But at the end of the day, I think if there are humans doing it, we know how to deal with malevolent humans.

We have some experience with that. Whereas if it’s software adversary, we don’t really know how to shut it down properly. Sometimes it’s impossible. Things like Bitcoin network or Internet are not easy to turn off. Yeah. It would seem though, that if AI empowered them to do something which could have a massive ripple effect, like the creation of some sort of virus or even a cyber security threat, you know, it could be triggered by some human actor leveraging the technology in a way that wasn’t benign. Right. I think a very dangerous attack could take place which uses automation to assist.

But I think it’s still strictly less dangerous than anything a much smarter adversary can do to us. Wow. So, knowing what you know about this potential threat and the inevitability of it, I know you’re proposing solutions, but are you optimistic at all that there are going to be some sort of attempts to regulate AI that are feasible? Or is this really just a last-ditch Hail Mary on your and others’ part to try to interdict this rapid acceleration? Yeah, I think the only winning strategy is not to build superintelligence if we have no idea how to control it.

It just seems like a terrible idea, and the personal interest of everyone involved says that they should not be doing it. So my hope is to create a large enough body of research literature where people can go: yeah, obviously we’re not going to be able to indefinitely control godlike machines, it makes no sense. We can get all the financial benefits out of the models we have today. Most of it has not been deployed in the economy; we’re not monetizing all these capabilities yet. There are trillions of dollars worth of wealth, and we should just enjoy it and not try to create general superintelligence.

We can create superintelligent, narrow tools for solving specific problems: cure this cancer, help us with green energy, things of that nature. But anything too general is impossible to control. Well, I have a proposition for you. It could be that human greed is actually what prevents the emergence of AI. I posed the question to my audience in the past: would a king create a God, if it also meant that that king could potentially lose their privilege? So on the one hand, the development of AI would empower a certain number of individuals and possibly widen the various disparities that characterize the human experience.

On the other hand, they might be very reluctant to do so, knowing that they would in fact lose their position of privilege if this AI were to create new energy, new ways of doing things, that really brought an era of abundance, which would effectively nullify the need for the hierarchy that we have in our society. So do you think a king would knowingly create a God if it meant that that king would no longer be king? Well, some of our kings are like 80 years old. They have very little to lose. If there is a chance they can get immortality out of it, or be the one who creates God and forever influences the rest of the universe, I think they might press the button.

That’s it. So you think they’ll do it because of the promise of what it might bring them? You have very little to lose as an individual who’s going to die anyways. So you think that that’s interesting. So you think that the Kurzweilian kind of push towards immortality will be what force or drives them to push the button? You know, throwing caution to the wind. I’m just showing that there is a change in what you are betting versus what you’re winning. If you’re a 20 year old billionaire, you don’t want to risk everything you have. Everything is in front of you.

If you are Warren Buffett, you are, I don’t know how many hundred years old he is, and you have hundreds of billions, what are you going to do? You might as well gamble. Interesting, very interesting. I never really thought about it that way, but it does make sense. If there is a prospect of curing whatever sort of illnesses and you’re a billionaire, why wouldn’t you want to push that button, just to roll the dice? So you think there could be a generational divide then, with respect to willingness to gamble with superintelligence? If we were rational beings, that would be a rational way to make this decision for individuals, not for humanity.

Obviously humanity as a whole has very different timelines, different preferences. What is good for one individual does not necessarily equate to good for everyone. So a young king might not choose to push the button, because he might be concerned about losing his position of privilege if he analyzed it deeply enough. I mean, he can have other concerns, such as competing kings taking over. But you have to weigh your dangers. Maybe another human king is easier to defeat than this artificial godlike creature. When is this going to happen, do you think? So we talked about some timelines in terms of prediction markets, in terms of statements from leaders of the labs.

The curves, if you just map compute versus capability, are hitting human-level performance in a few years. If you look at Kurzweil’s predictions, you talked about him, he said 2045 for enough compute to simulate all of humanity, and 2032 or something like that to simulate one human brain. So those are very reasonable estimates. It doesn’t matter if it takes another 10 years; the problems are still the same. It’s fascinating. So there’s not going to be any dull moments for the foreseeable future. Very interesting. So in terms of what you’re doing, the work that you’ve done to try to bring awareness to the potential threat, what sort of tangible projects are you working on with respect to regulation of this industry? So historically, the AI safety community thought that given enough time, enough resources, the problem could be solved.

I am in a somewhat unique position of saying no, I don’t think you can solve it. I think the problem is unsolvable. It’s not a question of giving you more money. So all my research is about the upper limits of the different tools we need to control advanced agents, in terms of explaining, predicting, verifying, communicating with them. And that’s what I’ve been publishing for the last couple of years, including the book you introduced: what are the impossibility results in this space? And I think if enough people understand that there are things you cannot do in this domain, and this is well known for other areas of computer science, then maybe they will act differently and make different decisions.

And as time permits, I try to publicly talk about those results, and hopefully there is enough understanding of that. And have influential, powerful people been receptive to this? So individually, almost everyone kind of agrees, at least about the dangers. Maybe not everyone agrees that it’s impossible. I heard people say, well, if we had infinite time and an infinite number of tries, we could solve it. But that doesn’t apply to reality. We don’t have more than one chance to not get exterminated. So either we’re going to get it right on our first attempt, which never happens in science and engineering, or the outcome is unpredictable.

What you’re saying has massive implications, and you seem to be quite certain about this eventuality. It’s interesting to see your thought process with the whole thing. Now, we talked briefly, when we were having our little interruption there, about how you personally are living your life knowing what you know, and we talked about the distinction between Terrans and Cosmists: people who are perhaps more of the earth and want a more natural lifestyle, and those who are going to opt in for the technocracy. Where do you fall along that spectrum? Are you somebody who is trying to approximate somewhat of an off-grid, self-reliant lifestyle, or are you leaning more towards, you know, if you can’t beat them, join them?

I admire people who have the resources and time to be completely independent, fully off-grid. Unfortunately, I don’t; I do what I can. Again, for predictable natural disasters: in Kentucky we get floods, we get things of that nature. But I think with respect to the topic of today’s conversation, if superintelligence is here and it’s malevolent, it doesn’t really make a difference how much gold you’ve stockpiled. But would it make a difference if you could potentially live off of or outside of that? From a practical point of view, with anything less than an omnipotent superintelligence, it seems like being less connected to the machine would be advantageous.

I agree completely. I support anyone who’s doing this or trying to do it. But let’s say we take one of the scenarios we talked about, synthetic biology. If there is a truly deadly virus that spreads like flu and kills like Ebola, I mean, at some point you’re going to connect with another human. So as long as you’re hiding in a cave somewhere, you’re okay, but it’s not sustainable. Well, that’s not very promising. Sorry. Yeah, no, I appreciate the realism of it all. I guess we do whatever we can to create some sort of hedge against those negative possible outcomes.

It seems prudent, at the very least, to make an attempt. Otherwise it’s a very fatalistic way of operating, to just presume that there’s an inevitability to it all, even though I agree with you. But it also seems incumbent upon people who are aware of these negative outcomes to act as an insurance policy for humanity. Because the people who are very plugged in are potentially going to be closest to the problem when it hits, in whatever form that might be. Maybe it’s just losing your power when you live in a high-rise building in the city, or something worse; you’ll be the first to be infiltrated by the nanobots or something, I don’t know.

It seems you should look at different scenarios. So we are talking about the worst case; maybe there is a better case. Maybe the problem is that there is technological unemployment and you have a hard time with your investments doing well. So people look at what happens if you have this AI-dominated economy: what should you be investing in? Maybe you should be buying land in the desert somewhere for solar power plants. Maybe you should be stockpiling processors or bitcoin. So it really depends on what scenarios you have for the future. It doesn’t have to be the one I worry about just because it is strictly the worst-case one.

Right. Are you a bitcoin guy? My lawyer tells me not to answer those questions. Fair enough, fair enough. Well, it makes sense that if we were entering into a world that was primarily digital, there is going to be some digital way to denominate an economy, and cryptocurrency or decentralized currency will perhaps be the economy that these AI agents utilize. So you think gold is perhaps a relic and is not going to be too relevant? Aside from making the supercomputers and quantum computers, of course, and chips, it’s hard to predict.

The problem with crypto is that it is so easy to hack through social engineering. If you have super-convincing, super-capable systems, most likely you’ll be quickly separated from your crypto piles. I don’t know if gold is easier to protect, but it’s a little more obvious when someone tries to take it. So you mean somebody will convince you to empty your wallet, through phishing or some kind of elaborate con? That, and just hacking the algorithms; a lot of that. Maybe the underlying encryption is strong, but the implementation of the software is weak. We saw, just this week, new attacks where people fake parts of an address to make it look like an address you actually use frequently, send you a zero-amount transaction, and then the next time you’re sending money, you pick, from your address book, an address which doesn’t belong to you, and you lose everything.
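As a minimal illustration of the address-poisoning attack he describes, here is a hedged sketch, in Python, of the kind of client-side check a wallet could run before sending funds. The addresses and length thresholds are hypothetical, invented for illustration; the point is only that a lookalike address can match the visible prefix and suffix of a saved contact while differing in the middle.

```python
# Hypothetical sketch: flag a "lookalike" address that matches the visible
# ends of a saved contact but is not identical, which is the pattern used in
# address-poisoning attacks. All addresses and thresholds here are invented.

KNOWN_ADDRESSES = {
    "0x1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b",  # a saved contact (made up)
}

def looks_poisoned(candidate: str, known: set[str],
                   prefix_len: int = 6, suffix_len: int = 4) -> bool:
    """True if candidate mimics a known address without matching it exactly."""
    if candidate in known:
        return False  # exact match: the genuine saved address
    return any(
        candidate[:prefix_len] == addr[:prefix_len]      # same visible start
        and candidate[-suffix_len:] == addr[-suffix_len:]  # same visible end
        for addr in known
    )

# Usage: check a pasted address before signing a transaction.
pasted = "0x1a2b3cffffffffffffffffffffffffffff8f9a0b"
if looks_poisoned(pasted, KNOWN_ADDRESSES):
    print("WARNING: address resembles a saved contact but differs; verify it in full.")
```

The defense implied in the conversation is the same idea done by hand: compare more than the first and last few characters before trusting an address from your history.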

It doesn’t seem like most people would notice this, because the first, I don’t know, 10 digits of the address are the same. Yeah, it’s interesting. I know somebody in the prepping community had a good suggestion with respect to having a secret code word that you and only your family know. Because of course, in doing podcasts like you’ve done, they now have a database of how you sound, and they can essentially replicate your voice, call your family member, and ask for some money or say it’s an emergency, something like that. Possibly even spoof your phone number.
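To make the code-word idea concrete before it gets wrapped up below, here is a small hedged sketch in Python, assuming a secret agreed on in person. The challenge-response framing and every name in it are our illustration, not something from the conversation; the point is simply that a cloned voice alone cannot answer correctly.

```python
# Hypothetical sketch of the family code-word idea as a challenge-response,
# assuming a secret agreed on in person and never shared online. In real life
# this happens verbally; the logic is the same: voice alone proves nothing.

import hashlib
import hmac
import secrets

SHARED_SECRET = b"correct horse battery staple"  # made-up example secret

def make_challenge() -> str:
    """Callee picks a fresh random challenge and reads it to the caller."""
    return secrets.token_hex(4)

def respond(challenge: str, secret: bytes) -> str:
    """Caller answers with a short HMAC of the challenge under the secret."""
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, answer: str, secret: bytes) -> bool:
    """Constant-time comparison, so timing leaks nothing either."""
    return hmac.compare_digest(respond(challenge, secret), answer)

# Usage: a suspicious "emergency" call.
challenge = make_challenge()
answer = respond(challenge, SHARED_SECRET)  # only a secret-holder can produce this
assert verify(challenge, answer, SHARED_SECRET)
```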

So, you know, having a code word, or just some plan like that to counteract this very dystopian, cyberpunk-style future, I think is of critical importance. That’s a great idea. My family does it; we have a set of secret codes. Okay, excellent. Now, I could ask you a lot more specific questions about your solution-focused approach to dealing with these things, but what are you working on? I know you just released a book last year. What’s your next step in this line of research? Still trying to understand what the limits are in different aspects of this technology.

So the most recent book is looking at the question you brought up a few times, which is consciousness in those systems. Will they have rights? Possibly. Will they have protections? Can they feel pain, suffering? It’s a very interesting topic. I don’t think we have the tools to answer those questions scientifically, but it’s definitely extremely important. If this agent is smarter than you, it’s very hard to argue that it doesn’t deserve any rights, that it’s just a dumb machine; I mean, it’s the one in charge, kind of. At this point, do you think we’re not too far from allowing these machines to have rights in that respect? So there are different rights.

If you’re talking about not suffering as a right, meaning you cannot torture them or subject them to things they perceive as very negative, that makes a lot of sense. If you’re talking about rights in terms of voting rights, democratic rights, then that would basically take away voting rights from humans. If you have a trillion bots all voting against you, you don’t have a vote; you have been removed as a decision maker. So that sounds like a really bad idea. Yeah, that’s interesting, because I could see a situation where the rights and privileges of these AI agents begin to infringe upon human beings’ rights.

And that would be grounds for a lot of dissent; it would be grounds for protest by humans. It seems, as a society, just based on the progressive arc of things, that it’s not a stretch to presume people will someday be protesting in the streets for the rights of machines. And I could see a situation where people become so enamored with these AI agents, and develop such meaningful relationships with them, that they want to protect them and advocate that these things have rights. And this, perhaps, is where the divide between cosmists and Terrans first presents itself.

Historically, the more powerful usually gave rights to the less powerful after realizing they were worthy of those rights. Here the situation would be different, in that the systems might actually become more powerful, so they would not be begging for their rights; they would kind of be allowing us to keep ours. Well, that’s what enlightened people would think. But there are going to be a lot of people who just love their agents, because they’ve developed these very meaningful relationships with them, and who want them to be recognized. I mean, we live in an era of identity crisis in a lot of ways.

And, you know, I could see that spilling over into the digital world as well, of course. But also, we don’t really know if they are starting to acquire those internal states. How would it be different if they were in fact conscious and suffering? We have no way of testing for it other than to ask and see what they answer. Okay, well, this has been very fascinating. Where can people find out more about your work? You can follow me on X, follow me on Facebook. Just don’t follow me home. Very important. Fair enough.

I’m sure there are enough AIs following you anyway, so you have more than enough company. All right, guys, go and check that out; I’ll post links in the description. And certainly go and check out Roman’s book. Thank you very much for coming on. Thank you. The best way to support this channel is to support yourself by gearing up at Canadian Preparedness, where you’ll find high-quality survival gear at the best prices. No junk and no gimmicks. Use discount code prepping gear for 10% off. Don’t forget: the strong survive, but the prepared thrive. Stay safe.

See more of Canadian Prepper on their Public Channel and the MPN Canadian Prepper channel.
