Here is a quick recap of my real-world experience installing and using OpenClaw.
TL;DR Took a while to install and configure, even longer to connect to Telegram. Then ran out of TPM quota on Google Cloud's Gemini API after fewer than 100 chat interactions. Hit problems and potential security risks due to hallucinations.
I set up a "claw" user account (not admin) on my MacBook Air to isolate it, and looked for and implemented every security measure I could find. OpenClaw is open, and it comes with major security risks.
Perplexity was my assistant to give me step-by-step instructions.
I asked it to sum up my very limited dev competencies:
Reda is an experienced systems administrator transitioning to modern cloud-native and Node.js-based tooling. Provide explicit, copy-paste-ready commands with brief inline comments explaining why each step matters. Include common failure modes and their symptoms. Assume strong Unix fundamentals but explain JavaScript/Node ecosystem conventions (like where configs live, how env vars override files, and when to use `sudo` vs user-local installs). Always show verification commands after each major step. A security-first approach is essential.
This is very generous. Last time I coded anything was in C++. That was in 1990. And I constantly have to look up basic Unix stuff.
The biggest problem is that even Perplexity gets confused by recent but now-outdated information about installation, configuration, and so on. This is probably due to OpenClaw's velocity of development.
For example, its first recommendation was to install via npm. It turns out the curl command on the http://openclaw.ai home page is much better.
The next problem was how to connect Telegram and Slack. After much wrangling, my claw answered on Telegram.
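One thing that helps when the Telegram wiring fails is to check the BotFather token directly against Telegram's Bot API, independently of OpenClaw's own configuration, so you can tell a bad token apart from a gateway misconfiguration. The sketch below uses Telegram's documented URL convention (`https://api.telegram.org/bot<TOKEN>/<method>`); the token value itself is whatever BotFather gave you.

```python
import json
import urllib.error
import urllib.request

def getme_url(token: str) -> str:
    """Build the URL for Telegram's getMe method, which returns the bot's identity."""
    return f"https://api.telegram.org/bot{token}/getMe"

def check_token(token: str) -> bool:
    """Return True if Telegram accepts the token. Needs network access."""
    try:
        with urllib.request.urlopen(getme_url(token), timeout=10) as resp:
            return json.loads(resp.read()).get("ok", False)
    except urllib.error.HTTPError:
        return False  # 401 Unauthorized means the token is wrong or revoked
```

If `check_token` returns True but the agent still does not answer, the problem is on OpenClaw's side of the connection, not Telegram's.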
That was the best moment.
Within a couple of hours, I had three take-aways:
1) It really is transformative.
2) It hallucinates and fails at basic stuff.
3) When it fails, in a few instances it found a fix and seemed to successfully implement it.
For example, when I explored how to allow it to view files in my Dropbox and upload its outputs, it claimed it was going to install "dropbook", a plugin that does not exist.
That was scary, because it then said it had downloaded the repo for dropbook and installed it, then claimed it had removed it. When asked, it gave me a link to the Moltbot plugin site, which, of course, has nothing called dropbook.
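Whatever OpenClaw's actual plugin mechanism looks like, one way to defang hallucinated package names is to make installation fail closed against a human-maintained allowlist. This is only an illustrative sketch; the plugin names below are made up, not real OpenClaw plugins.

```python
# Illustrative guard: the names here are placeholders. The point is that a
# hallucinated plugin like "dropbook" should fail closed, never trigger a download.
ALLOWED_PLUGINS = {"telegram", "slack", "dropbox"}

def can_install(plugin: str) -> bool:
    """Allow installation only for plugins a human has explicitly vetted."""
    return plugin.strip().lower() in ALLOWED_PLUGINS
```

The design choice is deliberate: an agent that invents a package name gets a refusal, not a best-effort fetch from a public registry.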
Then I hit the TPM wall.
My Google Cloud is Tier 1, which just means I have a credit card linked to my account.
My rudimentary understanding was that "TPM" measures how many tokens you have used. It actually stands for tokens per minute: a rate limit on how fast you can consume tokens, not a cumulative total. My quota is 2 million.
Yes, claw burned through that in less than 100 chat interactions, over 2-3 hours. (Remember, the rest of the time was spent installing and figuring out how to get it to work.)
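A back-of-envelope calculation shows how fast that happens when an agent resends a large context on every model call. All three numbers below are assumptions for illustration, not measurements from my session.

```python
# Back-of-envelope, with assumed numbers (not measured from OpenClaw):
context_tokens = 15_000      # assumed: system prompt + chat history resent on each call
output_tokens = 1_000        # assumed: average reply size
calls_per_interaction = 2    # assumed: agents often make several model calls per chat turn

tokens_per_interaction = (context_tokens + output_tokens) * calls_per_interaction
interactions_per_2m = 2_000_000 // tokens_per_interaction
# 32,000 tokens per interaction -> only about 62 interactions per 2M tokens
print(tokens_per_interaction, interactions_per_2m)
```

Under these assumptions, 2 million tokens buys only about 62 chat turns, which is consistent with running dry in under 100 interactions.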
I looked around for how to increase my quota.
Did not find anything simple. I do not even see the option to make a request to Google.
Not sure when the TPM quota resets, but the rate limits stopped me dead in my tracks.
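Because TPM is a per-minute throttle, the standard mitigation is to catch the rate-limit error and retry with exponential backoff. The sketch below is the generic pattern, not OpenClaw's internals; a real Gemini client would catch its SDK's 429 / quota-exceeded exception where I use a stand-in `RuntimeError`.

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0):
    """Retry fn() with exponential backoff plus jitter when it signals rate limiting.

    Generic sketch: substitute the real client's 429 / "resource exhausted"
    exception type for RuntimeError."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for an HTTP 429 rate-limit error
            if attempt == max_retries - 1:
                raise
            # 1x, 2x, 4x... base_delay, plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

This does not raise the quota, but it turns a hard stop into a slowdown for workloads that only burst past the per-minute limit.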
I was just starting to think through the various things I could do with this new co-worker.
I'm now searching for solutions to this problem...
Science is haunted. In early 2024, one major publisher retracted hundreds of scientific papers. Most were not the work of hurried researchers, but of ghosts—digital phantoms generated by artificial intelligence. Featuring nonsensical diagrams and fabricated data, they had sailed through the gates of peer review.
This spectre of AI-driven fraud is not only a new technological threat. It is also a symptom of a pre-existing disease.
These failures are not just academic embarrassments. In fields like global health, where knowledge means the difference between life and death, we can no longer afford to ignore them. Indeed, the crisis in scientific journals is not, at its heart, a crisis in publishing. It is a crisis of knowledge—of what we value, who we trust, and how we come to know. That makes it a crisis of education.
Bill Gates’ latest public memo marks a significant shift in how the world’s most influential philanthropist frames the challenge of climate change. He sees a future in which responding to climate threats and promoting well-being become two sides of the same mission, declaring, “development is adaptation.” This position resonates with a core message that has emerged across global health over the past several years: climate change is about health.
The 2025 report confirms that climate change’s assault on human health has reached alarming new levels:
Thirteen of 20 indicators tracking health threats are flashing red at record highs.
Heat-related mortality, now estimated at 546,000 deaths annually in the 2012-21 period, has climbed 63% since the 1990s.
Deaths linked to wildfire smoke pollution hit a new peak in 2024, while fossil fuel combustion overall remained responsible for 2.52 million deaths in 2022 alone.
Here is everything that the new Lancet Countdown says about the significance of indigenous and other forms of local knowledge, and their value for community-led action to respond to the impacts of climate change on health.
The Empowering Learners AI 2025 global conference (7-10 October 2025) was a fascinating location to observe how academics – albeit mostly white men from the Global North centers that concentrate resources for research – are navigating these troubled waters.
The impacts of AI in education matter because, as the OECD's Stefan Vincent-Lancrin explained: "performance in education is the learning, whereas in many other businesses, the performance is performing the task that you're supposed to do."
The problem is not that AI will do our work for us.
The problem is that in doing so, it may cause us to forget how to think.
This is a critical moment for work on gender in emergencies.
Across the humanitarian sector, we are witnessing a coordinated backlash.
Decades of progress are threatened by targeted funding cuts, the erasure of essential research and tools, and a political climate that seeks to silence our work.
Many dedicated practitioners feel isolated and sense that their work is being devalued.
This is not a time for silence.
It is a time for solidarity and for finding resilient ways to sustain our practice.
In this spirit, The Geneva Learning Foundation is pleased to announce the new Certificate peer learning programme for gender in emergencies.