
> people were losing their minds about one in a million chance of complications caused by vaccination

A bit different, and in the context of vaccination being an aggressive, government-led initiative to enforce a medical procedure on their bodies.


This will never hit a production enterprise system without some form of hooks/callbacks in place to instill governance.

Obviously much harder with a UI than with agent events, similar to the links below.

https://docs.claude.com/en/docs/claude-code/hooks

https://google.github.io/adk-docs/callbacks/
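To make that concrete, here's a minimal sketch of what a governance hook could look like on the Claude Code side: a PreToolUse command hook (registered per the docs above) that reads the tool-call JSON from stdin and blocks the action by exiting with code 2. The blocklist is a hypothetical stand-in for a real policy engine.

  #!/usr/bin/env python3
  # Minimal sketch of a Claude Code PreToolUse governance hook.
  # Assumes the documented stdin payload shape ({"tool_name": ..., "tool_input": ...});
  # exit code 2 blocks the tool call and feeds stderr back to the model.
  import json
  import sys

  BLOCKED_SUBSTRINGS = ["rm -rf", "DROP TABLE"]  # hypothetical stand-in for a policy engine

  def main() -> None:
      event = json.load(sys.stdin)
      if event.get("tool_name") == "Bash":
          command = event.get("tool_input", {}).get("command", "")
          if any(bad in command for bad in BLOCKED_SUBSTRINGS):
              print(f"Blocked by governance policy: {command!r}", file=sys.stderr)
              sys.exit(2)  # blocking: the harness cancels the tool call
      sys.exit(0)  # allow

  if __name__ == "__main__":
      main()

The property that matters for governance is that the block is enforced by the harness, not by the model: exit code 2 cancels the tool call regardless of whether the model "respects" the hook output.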


>This will never hit a production enterprise system without some form of hooks/callbacks in place to instill governance.

Knowing how many times Claude Code has breezed past a hook call, actually computed the hook's answer, and then thrown it away without integrating the results, I think the concept of 'governance' is laughable.

LLMs are so much further from determinism/governance than people seem to realize.

I've even seen earlier versions of Claude Code breeze through a hook that ended with a halting test failure and "DO NOT PROCEED" verbiage. The only hook guaranteed to work on every call is a big theoretical dangerous Claude-killing hook.


You can obviously hard-code a hook


Hooks can be blocking, so it's not clear what you mean.


Hi! I work in identity products at Browserbase. I’ve spent a fair amount of time lately thinking about how to layer RBAC across the web.

Do you think callbacks are how this gets done?


Disclaimer: I'm a cofounder; we focus on critical spaces with AI. Also, I was the feature requester for Claude Code hooks.

But my bet: we will not deploy a single agent into any real environment without deterministic guarantees. Hooks are a means...

Browserbase with hooks would be really powerful: governance beyond RBAC (while of course enabling relevant guardrails as well, e.g. "does the agent have permission to access this SharePoint right now, within this context, to conduct action x?").
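For a concrete shape of that check, a hedged sketch; the types and policy format are hypothetical, not a Browserbase or ADK API, just the idea of "does agent X, in this context, have permission to do action Z on resource R?":

  from dataclasses import dataclass

  # Hypothetical shape of a contextual agent-permission check.

  @dataclass
  class AgentAction:
      agent_id: str
      resource: str      # e.g. "sharepoint://finance/q3-report"
      action: str        # e.g. "read", "write", "delete"
      task_context: str  # what the agent is currently trying to accomplish

  def is_permitted(req: AgentAction, policy: dict) -> bool:
      # Allow only if the (resource, action) pair is granted to this agent
      # AND the grant covers the current task context.
      grants = policy.get(req.agent_id, {})
      allowed_contexts = grants.get((req.resource, req.action), set())
      return req.task_context in allowed_contexts

A hook would run something like this before letting the tool call through, so the grant is evaluated at action time rather than once at session start.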

I would actually love to meet with you; my shop cares intimately about agent verification and governance. We're soon to release the tool I originally designed for Claude Code hooks.


Let's chat. My email is peyton at browserbase dot com.


Deterministic guarantees and corrective behavioral monitoring for AI agents (starting with Claude Code and ADK). Think security + performance bumper rails, at the cost of zero context.
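As a sketch of what "corrective" (as opposed to purely blocking) monitoring could look like in a PostToolUse-style hook; the threshold and state file are made up, and since hooks run as fresh processes, state has to live outside the process:

  #!/usr/bin/env python3
  # Sketch of a corrective PostToolUse-style monitor. Assumptions: same stdin
  # JSON shape as a PreToolUse hook, and exit code 2 feeding stderr back to
  # the agent as guidance rather than killing the session.
  import json
  import sys
  from pathlib import Path

  STATE = Path("/tmp/agent_monitor_files.txt")  # hypothetical session state store
  MAX_FILES_TOUCHED = 20                        # made-up "bumper rail" threshold

  def main() -> None:
      event = json.load(sys.stdin)
      if event.get("tool_name") not in ("Write", "Edit"):
          sys.exit(0)
      path = event.get("tool_input", {}).get("file_path", "")
      seen = set(STATE.read_text().splitlines()) if STATE.exists() else set()
      seen.add(path)
      STATE.write_text("\n".join(seen))
      if len(seen) > MAX_FILES_TOUCHED:
          # Corrective feedback rather than a hard kill: steer the agent.
          print("Bumper rail: too many files modified this session; "
                "pause and confirm scope before editing more.", file=sys.stderr)
          sys.exit(2)
      sys.exit(0)

  if __name__ == "__main__":
      main()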

I was the feature requester for Claude Code hooks and have been involved in AI governance for quite a while; this is an idea I'm excited about.

Ping below if you want to be an early beta tester. Everything is open source, no signups.


My CEO sent out an AI-generated blog post today. I've never felt more frustrated reading something in my life: "x happened, here's what it means", "groundbreaking", "game-changer", "significant", "forefront of a technological shift".


I hope you learned an important lesson about reading the next email from the CEO.


At one company I worked at, the executives sent out constant emails to everyone. It was part of the culture. After some layoffs, HR leadership sent out a THREE-PART email about how they were working on a very important project and how it took many, many hours and meetings and so on.

The project was renaming the HR department ...

After that I sent all executive emails to a folder and did not read them. My mood improved drastically.


I refuse to read anything that seems to be obviously AI-generated. If they can't be bothered to write down what they think, then I don't have any reason to bother reading what they've posted either.


Why are you reading your CEO's blog?

This question applies whether it's written by an AI or not.


You misread. It's not the CEO's blog.


He drew a massive college crowd and was shot at that event. That's your answer.


We don't need gatekeepers. We do need to verify agents that act, in a reasonable way, on behalf of a human, versus an agent swarm/bot-mining operation (whether conducted by a large lab or a kid programming Claude Code to DDoS his buddy's Next.js deployment).


If you can manage the code part on your own, you can hit Esc twice and revert to a previous context state using native capability in Claude Code.


We haven't determined that it isn't a useful mechanism for capable intelligence.

For instance, it is becoming clearer that you can build harnesses for a well-trained model and teach it to use that harness in conjunction with powerful in-context learning. I'm explicitly speaking of the Claude models and the power of whatever it is they started doing in RL. Truly excited to see where they take things, and the continued momentum with tools like Claude Code (a production harness).


The underlying model architectures have issues that persist as models get better. We're still using transformers and RL.

  * Optimized for task completion, with limited attention resources for global alignment (RL/RLAIF reward loops/hacking).
  * These systems run outside of chat now: file systems, CLIs, DBs, browsers → real-world side-effects that you cannot train for. Hallucination becomes a problem of contradiction in the real world, and alignment is something an agent will struggle with as it's optimized to complete tasks, which is why you see things like databases being dropped today.
  * These are baked-in problems, not even considering the adversarial nuances of things like prompt injection.

As AI advances, so do these issues.

Maybe it's cliché from an AI safety perspective, but I can never get over https://en.wikipedia.org/wiki/Instrumental_convergence as we see micro-instances of it in our day-to-day with today's agents. Again, an issue that has existed since the dawn of these types of models. https://www.youtube.com/watch?v=s5qqjyGiBdc&t=1853s


I always appreciated Claude Code's commit authoring, whereas I think a lot of people were offended that "their" work was being overshadowed by an AI's signature.

