I asked it to check why the cron job kept failing; it examined the cron payload and suggested likely reasons for the failure. I gave it approval to go ahead and fix it. It tried different options (like different domains) and finally figured out the anti-CF option.
The other tasks (like the MariaDB install and restore, and the Python code refactoring) were a result of the initial requests made to Claw, like graphing my Gmail email archives.
I'm not sure. Most of it isn't even in the logs; it was followed up elsewhere.
You can try something like this on Gemini 3 Pro:
> Break down aspects of the economy by amenability to state control high/medium/low, based on what we see in successful economies. Include a rationale and supporting evidence/counterexamples. Present it in 3 tables.
It should give you dozens of things you can look up. It might mention the successful Singapore- and Vienna-style public housing models. There are some nice videos on those on YouTube.
Online discussions are usually at the level of "[Flagged] Communism bad".
I have the luxury of a few friends capable of discussing complex military, political, and social issues and holding nuanced views backed by evidence.
Because of that good fortune, it hasn't occurred to me to use an LLM to organize information for these topics. I appreciate your sharing your approach and I look forward to trying this use case of LLMs.
Arguably because the parts the AI can't do (yet?) still need a lot of human attention. Stuff like developing business models, finding market fit, selling, interacting with prospects and customers, etc.
I wonder if this phenomenon comes from how reliable the lower layers have become. For example, I never check the binary or assembly produced from my code, nor even the intermediate bytecode.
So vibers may be assuming the AI is as reliable, or at least can be with enough specs and attempts.
I have seen enough compiler (and even hardware) bugs to know that you do need to dig deeper to find out why something isn't working the way you thought it should. Of course I suspect there are many others who run into those bugs, then massage the code somehow and "fix" it that way.
Yeah, I know bugs exist in the lower layers. But since those layers are mostly deterministic (hardware glitches aside), I think they are relatively easy to rely on, whereas LLMs seem to have an element of intentional randomness built into every prompt response.
Those of us working from the bottom, looking up, do tend to take the clinical progressive approach. Our focus is on the next ticket.
My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry-shaking phenomenon could mean death. If that FOMO is justified, then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless, of course, they bet too hard on a fad, in which case the company may go down in flames or be eclipsed by competitors.
Ideally there is a healthy tension between forward-looking bets and the on-the-ground performance of new tools, techniques, etc.
They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't, and the stock price suffers because of that (merely because the "perception" that your company has fallen behind affects market value), then that is an issue. There's no true long-term planning at play; otherwise you wouldn't see such obvious copycat behavior among CEOs, like pandemic overhiring.
Every company should have hired over the pandemic because doing so had a higher EV than not hiring. It's like being offered the chance to pay $1000 for a 50% chance to make $8000, where the same coin flip decides the outcome for everyone who takes the offer. If you are maximizing for the long term, everyone should take the offer, even if it results in a reality where everyone loses their $1000.
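Spelling the arithmetic out (a minimal sketch, using only the hypothetical numbers from the example above):

```python
# Expected value of the bet described above: pay $1,000 for a 50% chance at $8,000.
cost = 1_000
payout = 8_000
p_win = 0.5

ev = p_win * payout - cost  # 0.5 * 8000 - 1000 = 3000
print(f"Expected value: ${ev:,.0f}")  # positive, so the EV-maximizing move is to take the bet
```

The EV is positive, so ex ante the bet is worth taking, even though in the world where the shared coin flip lands badly, everyone who took it is down $1000.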