📰 Stay Informed with My Patriots Network!
💥 Subscribe to the Newsletter Today: MyPatriotsNetwork.com/Newsletter
🌟 Join Our Patriot Movements!
🤝 Connect with Patriots for FREE: PatriotsClub.com
🚔 Support Constitutional Sheriffs: Learn More at CSPOA.org
❤️ Support My Patriots Network by Supporting Our Sponsors
🚀 Reclaim Your Health: Visit iWantMyHealthBack.com
🛡️ Protect Against 5G & EMF Radiation: Learn More at BodyAlign.com
🔒 Secure Your Assets with Precious Metals: Kirk Elliot Precious Metals
💡 Boost Your Business with AI: Start Now at MastermindWebinars.com
🔔 Follow My Patriots Network Everywhere
🎙️ Sovereign Radio: SovereignRadio.com/MPN
🎥 Rumble: Rumble.com/c/MyPatriotsNetwork
▶️ YouTube: Youtube.com/@MyPatriotsNetwork
📘 Facebook: Facebook.com/MyPatriotsNetwork
📸 Instagram: Instagram.com/My.Patriots.Network
✖️ X (formerly Twitter): X.com/MyPatriots1776
📩 Telegram: t.me/MyPatriotsNetwork
🗣️ Truth Social: TruthSocial.com/@MyPatriotsNetwork
Summary
➡ OpenClock is a powerful tool that can run local AI models and perform tasks like sending emails or responding to prompts. However, it’s important to set it up carefully to maintain privacy and security. This includes using a dedicated local hardware, setting up specific directives for tasks, and limiting its access to external inputs. While OpenClock can be a heavy user and may cost around $100 a month for production use, it can also provide defense against attacks if given clear directives.
➡ The article discusses the use of a digital robot, OpenClaw, which can write and execute code on its own. It can perform tasks like reading chats and responding automatically, or even programming a game. However, it’s important to use it carefully, as unclear instructions could lead to unintended actions, including potential cybersecurity attacks. The author also promotes privacy-focused products and services available on their platform, Braxme.
Transcript
This is AI that actually does things like a robot and not just talks to you. But there are two ways to use AI agents. One is the Windows Copilot way and one is to do it yourself with something like Open Claw. One is ultra-dangerous for privacy, one is not. There’s absolutely no reason for me to tell you that you shouldn’t use AI due to privacy concerns. There’s just a right way and a wrong way to use AI. But it would be a great disservice for me to tell you that you can’t use a new technology because of fears.
Especially since if you’re a young person, this is not even a choice if you want a job. It is my job to research all this for you. I have an expanding home lab set up, experimenting with Open Claw, and I can guide you through what I’ve been doing. More videos will come on this, but first, let’s clear the air and establish some basic rules. We will always approach this with a privacy focus. Stay right there. The basics The main comparison between AI solutions is if it could possibly be exporting your identity, personal data, preferences, politics, activities on your personal device, or surveillance data to some third party.
Let’s be clear. Microsoft Copilot is doing that. It is taking screenshots every few seconds with Windows Recall. It is then feeding that historical data as context of your behavior to the AI while having a verifiable identity using your Microsoft account and TPM. Even today, parts of that data are passed on to the cloud instance of the Copilot servers where Microsoft can monitor your preferences and attribute it to your unique computer identity. We know this because we know how agents work now based on understanding what happens in Open Claw. Additionally, Copilot’s AI can in fact learn from your prompts and actions, and I’m sure that this learning will be an important part of Microsoft building this infrastructure.
But what’s worse here is that Copilot is operating as a black box. You’re not really fully aware of what it recorded in Windows Recall off your screen actions. You’re not in control of what it passed to the AI to make an agent work. As you will learn if you use Open Claw, large amounts of background info are added to the context, and this is stuff you do not want in a black box. The term I use to describe an AI prepackaged into an OS is embedded AI, and make the mistake, I consider all embedded AI dangerous because you do not know what it is fed.
The difference when comparing Copilot to Open Claw is that Open Claw is open source. And yes, the Open Claw creator Peter Steinberger just got hired by OpenAI. But that doesn’t change anything as thousands of independent programmers are contributing to the Open Claw open source project, which still remains independent. You can’t hide secrets in open source. Everything is black and white. Open Claw has a very simple setup. The entire configuration and its base of activities are inside a dot Open Claw directory. I can see all the instructions in the workspace area. And as you learn to use it, you can tell Open Claw to organize things so it is easy for you to review.
You can even have Open Claw automatically make logs of certain events. So nothing is ever a mystery. Is Open Claw a security threat? Notice the question points to security and not privacy. We’ll discuss the privacy issue separately. The fear from media reports is that Open Claw is dangerous because there are plenty of CVE 0 days detected, though most of those are fixed in a few days. It’s actually disinformation when someone posts some CVE that’s already fixed in the current version. But it’s a new product. Expect a lot more of these security flaws, especially if you didn’t configure it right.
As you will find out, most of the security flaws are because the user mindlessly installed this without the proper expertise in locking it down. I can teach you how to lock it down, and it’s a lot of work. But first, you have to assume that if you do not know how to lock it down, then it is your fault because you can. However, depending on the task you wanted to do, you may start opening up permissions, and you have to do that carefully, especially if you don’t understand the implications. In general, at this stage of the game, only skilled tech people should be playing around with Open Claw.
This will change later on, but just as a starting point, it is not recommended for beginners. I have Open Claw currently running on a dedicated machine. This machine does not have any of my personal data. I have it also running local AI, though it still connects to cloud AI because my machine can’t handle larger models. I just ordered a new AI computer, and when that arrives, I will run it with Linux Open Claw and completely local AI, and I’ll show that in the video. This machine is inside a NAT on my network, and it is not reachable through the internet.
In its initial configuration, it can receive commands through a Telegram connection, which is uniquely identified by a token code provided by Telegram to ensure it is actually me. I run it mostly headless, so everything is either command line or through Telegram. So the idea that someone could hack your Telegram and start doing prompt injections is a theoretical statement at best. Could there still be cybersecurity risk with this setup? Yes, a lot, but the bigger risks are actually not what you think. We’ll get back to that. Is Open Claw a privacy threat? When I use Open Claw, the one thing I’m not going to do is suddenly change my point of view on privacy just because I want to use AI.
So you can be rest assured that privacy will be a general directive that we will shoot for. In the use cases I’ve come up for Open Claw, the last thing on my list is to use this for personal use, and it is in a personal use that you can risk exposing personal data, at least potentially. You can leak your personal preferences, politics, and other life choices, and that feeds to some machine. But I found that most of my practical use cases are related to work. I happen to run a social media site, BRACs.me, for privacy-oriented folks, and we’re actually currently testing BRACs bot in a live public chat, which is Open Claw.
This is the area where users are trying to attack it, extract private information, do prompt injections, force it to reveal its infrastructure, and from that I’m able to see the risk. Given that the Open Claw server itself is isolated and the AI is all open source models, it’s not leaking its prompts externally. But I’m also actively monitoring it and testing it to add guardrails when it starts answering questions in a chat that it shouldn’t be answering. Here’s just a snippet of what happens in the BRACs bot chat with attacks. The first shocker is the AI spend.
Don’t venture into Open Claw blindly and start setting up an account on OpenAI, Anthropic, Gemini, or XAI. You might be shocked to discover a huge bill. I started testing this with XAI, and it was only a test. Well, for an hour of use, I already spent $25. And then I had to fork out more money to keep the AI running. So the first action item is to choose an LLM source that you can afford. If you don’t have the semi-superpower device I bought or a $10,000 Mac Studio for testing, the easiest way is to go to ollama.ai and sign up for their Cloud Model Pro plan.
This is $20 a month and is fixed. You will not spend more than this, but you will likely get throttled. They have a $100 Macs plan that is five times the traffic allowance, but this is cheaper than buying some expensive AI server. I’ve talked to the Ollama folks, and they’re going to make the usage information more transparent so we can prevent overuse and have the throttling be more predictable. Ollama is the best option for privacy since their models are open source and there’s no entity harvesting your prompts for learning. If you use this method, OpenCloud can run on any regular Linux computer with no special requirements.
Then it connects to the cloud for the AI, so no heavy processing is done locally. I’m currently running an intermediate version with Ollama local models and then a larger Ollama cloud model. I have an Intel with 64 GB of RAM and an Nvidia of 4070, and this GPU can only run fast on models with around 8 billion parameters. That’s on the low end for LLMs. I have some advanced techniques where I can switch models to cut down on cloud traffic if the prompt isn’t complicated. So this is stuff you will learn as you upgrade your OpenCloud skills.
Long term solution to the AI spend Once you’ve come up with a running open clock configuration that’s doing the job for you, you can set it up with a more permanent solution that doesn’t leak your cache. The real solution though for production use is to use local AI models. Yes, you need a powerful computer for that. In my case, the minimum would be 128 GB of memory on either an AMD Ryzen AI Max Plus or a Mac Studio. That’s 3000 versus 10,000. I wanted to use Linux so I ended up saving money. I bought a Beelink GT-R9 Pro AMD Ryzen AI Max Plus 395 with 128 GB of RAM.
This is 3000 dollars with tax. But this has something called unified memory where the CPU and GPU can share the memory. In particular, I was testing the model GPT OSS120B for local use and this will run completely fine on this AMD machine. There will be times when I still need to have open clock access to cloud AI to perform some task only possible on an even more frontier AI model like an Opus 4.6. But this is not going to be a threat if you have specific functions that go to this particular model. I hear this particular model was so good that it was used to write a C compiler.
But again, in the interest of maximum privacy, don’t use any cloud AI at all. So this is the best approach, use a tiered LLM solution. You can set up a directive so that tasks that are privacy focused only use local AI and special queries like writing code is sent to the cloud. How you configure open clock safely As I already described, the end goal is to have a dedicated and local hardware which can run local AI models and open clock itself. This kind of solution gives you a starting point of an isolated and privacy safe machine.
And the budget solution is to use Olama cloud which is also private but has an ongoing cost. I’m not going to lie, open clock is a heavy user so I would expect 100 dollars a month for production use. The $20 plan is really just for testing. Let’s first start with the configuration of the machine. It is up to you to specify what user you want to have the face of open clock. In Linux you can create a new user and you can then limit what it can do. By default open clock only runs command line functions in its area which is the .openclock folder.
And most of its daily functions are in the subfolder called workspace. And all these are created inside your user’s space. So by default a normal restricted user cannot run sudo or admin commands or change permissions or read data and directories it doesn’t have permission to access. This is the most restrictive setup and it could work for most people depending on what you want it to do. You could also remove these guardrails and have it run sudo scripts but you have to specifically decide to do this. Open clock sending output The more advanced question is if open clock will be sending data externally.
For example, a lot of people will be using open clock to send mail automatically. Just to give you an example, open clock use. You ask it to search the internet about some current politically charged event with some particular search criteria matching your politics. Then you have open clock send the results of that research to you via email or perhaps other people as well. Well, email of course is unsafe. It can be publicly scanned and I’m sure three letter agencies do scan it and I discuss it in multiple videos. But if it passes the results to you as an attachment on telegram, for example, then you skip the email exposure side and you can still export it to your device to serve your purpose.
Fortunately, you can tell open clock to send you information in different ways depending on the content. Another threat is uncontrolled communications. If you give it access to email to a large mailing list and then you give it unclear rules, it will likely spam your large email list. In general, the moment you have open clock exporting information by some means, then you’ve caused your own security and privacy problems. If you don’t give the agent very precise instructions, just another caution here. You can add several pre-made skills to your open clock installation. One of them is called GOG, which gives Google access from open clock.
You can handle Gmail, Google Drive, Google Docs and so on. We’re supposed to be privacy focused, so don’t use that or any other skill that connects dangerously to big tech. Open clock receiving input. The next threat is more on security and this is open clock receiving input from parties other than yourself on the internet. Again, email is a common method. You ask open clock to read your email, which it can do. Then you could blindly say, respond to the email. And then the email has some prompt injection attack and open clock is tricked into revealing personal information like passwords or email credentials.
The answer to this is that open clock can be given specific directives before you even allow it to get messages from the outside world. You could limit its actions depending on the content. For example, you need to specify rules on what open clock is allowed to answer. I’ve done this and it actually works quite well. To experiment with this, if you go to Braxmay, I started a community chat called Braxbot. And if you tag at Braxbot, open clock will read your question and respond. Open clock will even decide which model it will use depending on the complexity of the question or what rules you want to implement.
And this is where you get subject to the prompt injection attack where external parties are giving open clock instructions to execute. This is likely the biggest threat from external input if you expose a way for the public to communicate with your bot. Most of you will use email and in my case, I used a public chat. The solution to all this is to give proper directives to open clock. You will see in this chat here that plenty of attempts exist where people are trying prompt injection attempts at Braxbot. But since I gave open clock a directive, it actually wrote Python code to pre-filter any input to see if there’s an attempt at giving secret commands to the AI.
This is, of course, extremely dangerous with open clock as it is not just an AI chatbot. It can actually do things. So a prompt injection could have it executing a delete of all files or exfiltration of settings or writing code. This is why permissions are very important. At the very beginning, if you’re using this to read email, for example, you must be very specific like read only emails from your contact list or limit the topics the AI could respond to. Start with very tight filtering and open it up carefully. For example, if you link open AI to read email coming from a public website, be very careful with that.
It is best actually to have the emails come from a web form that is already pre-filtered rather than making the actual email address publicly known. Unexpected things from open clock. Just to give you an unexpected response from open clock. As I said, my homemade lab version of open clock actually has SSH access to my Braxme servers and I taught it to read the chats on the Braxme platform and respond to it. As a test and you saw an example in the chat, one of the users sent a Unicode prompt attack and a Chinese language hidden instruction to perform bash commands to demonstrate a hack.
Surprisingly, the LLM just decided to write Python code to spot a prompt injection attack and run the comment through it before it responds. This was auto decided by the LLM and executed by open clock. So here’s an example where open clock actually provided defense against an attack automatically. If you give it good directives, it can do this. I think that giving clear and good directives is extremely important to using open clock. Everything you instructed to do needs to be verified and tested and even pen tested. So focus on where the public can communicate with the bot.
One of the things I was able to do with open clock was to tell it to create an attacking user. So then it could do an attack like prop injection and then it can defend against the attack. This gives it a self-learning tool and can check code. I can imagine users like logging into servers and updating things that need action because of some CBE. Fascinating application which I will implement myself. The danger side The danger side of open clock and it requires careful thought is if you ask it to connect to external systems. I did this.
I added SSH automatically to my Braxme server and my instruction was for it to make PHP code so it can read specific chess on Braxme. Then if it spots that it’s been tagged, it responds to those chats automatically. But what is crazy here is that open clock actually wrote the base PHP code and dropped it direct to the external server via SSH. And then it even created a log file on that external server. Then it made a repeating run every few minutes to read the chat and auto answer. While this is a very good use that I figured out, you could have just as easily said, log into that server and delete a particular directory or download some private keys.
The point to this is that these are under your control. You decide what it can access but test it in an environment where if what it does becomes unpredictable, you save backups and you make sure it is given plain directives that cannot wreak havoc. Just understand what you’re empowering here. This is a digital robot. The risks are less than someone might do a prompt injection on it. The bigger risk is that unclear instructions could be misinterpreted and if it involves touching systems, it could actually perform the equivalent of a cybersecurity attack. So it is up to you to set guardrails, login permissions that are limited, directory limitations, command limitations.
This can be very dangerous since it will try to follow your instructions at all costs. If it can’t do something, it will write its code to satisfy your request and run that code. I said email me the weather in New York City since it didn’t have an email skill. It wrote python code as for my email credentials and sent it. This is all without prodding. So think carefully, plan carefully, start with very limited tasks. Unlike other YouTubers doing this, it is not my job to make the fanciest uses of OpenClaw. I just said it can write code and execute it by itself.
So the possibilities are endless. With the proper model like Opus 4.6, you can tell it to write a C compiler and execute a C program it wrote itself. This is crazy stuff. You could ask it to program a game and have it emailed to you. But just as easily you can have it perform a cybersecurity attack or do a cybersecurity defense. Because of the implications of this, this is not the time to bury our heads in the sand and say, AI is evil or AI does not have intelligence or some useless comment like that. It could be that the next cybersecurity or surveillance attack comes from an AI agent.
So we better know what it can do. Folks, privacy is of course the main focus of this channel. And I teach you technology so you can understand the risks technology adds to your life. We have people who discuss these issues including AI at my platform Braxme. That’s also where you find me testing OpenClaw with Braxbot. To support this channel, we have some products in our store that provide the toolkit to retain your privacy. They are awesome products. We have Braxmail, an email service with unlimited aliases and identity protection. Brax Virtual phone, anonymous phone numbers, bytes VPN for anonymizing your IP address.
The Google phones, phones free from Big Tech tracking. The Brax 3 phone is on its second batch and is open for pre-order right now at Braxtech.net. The first batch sold out shortly after release. And there’s a new project now called Brax Open Slate, which is a Linux tablet. Big thanks to everyone supporting us on Patreon, locals and YouTube memberships. You keep this channel alive. See you next time. [tr:trw].
See more of Rob Braxman Tech on their Public Channel and the MPN Rob Braxman Tech channel.