Hacker News | randall's comments

same. this is about losing a negotiation and saving face / exacting revenge.

i don’t get it. what’s his motive in your view? he literally has no shares in openai.

Maybe he just wants power? Some people are just evil and don't need greed to fuel their evil ways.

even marvel villain thanos was given a plausible motivation.

i don’t think you’d find this a reasonable perception if you actually knew him as a person.


I wanted the AOL one to say "Welcome" before "you've got mail!" lol

The last 3 years of LLM progress, to me, feel like 1994-1998.


I don't know. The novelty of LLMs faded very quickly for me.


I think they're scaling down in novelty quicker than they're scaling up in capability.

Still easy to be surprised if you stop paying attention for a while though.


Agree! I've been computering for 50 years and for me the significant milestones have been:

- rdbms

- PC

- Internet and email

- SaaS

- Mobile

- social media

- LLMs

I doubt that LLMs will be anything like as significant to our futures as social media, though. And social media's significance hasn't been an entirely good thing.


Except this time there isn’t a strong hacker counter culture.

Where are the greybeards in their flip flops? Where are the teen prodigies?

Everyone is sucking corporate dick, myself included


To me, the current AI boom is more like when McDonald's became available in my neck of the woods after '89. Amazing at first, but then you realize it's mostly sloppy grease that has its uses.

The wild technology race of the 90s, on the other hand, felt like a magical new dimension opening up. Maybe just because it took much longer to get thoroughly turned into a vector for BS.


this is like irl cryptographic signatures for content lol


thanks for the PR! :)


once your team comes to a consensus on what PII is, you can roughly guarantee it... especially as models improve.


So two things.

1/ crypto signing is totally the right way to think about this.

2/ I'm limiting prompt injection by using chain of command: https://model-spec.openai.com/2025-12-18.html#chain_of_comma...

we have a "gambit_init" tool call that is synthetically injected into every call and carries the context. Because it's the result of a tool call, it lands in layer 6 of the chain of command, so it's less likely to be subject to prompt injection.
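For what that injection might look like mechanically, here's a minimal sketch in the Chat Completions message format: a fabricated assistant tool call plus its result are spliced in ahead of user turns, so the context arrives as tool output rather than user text. The message shapes follow OpenAI's API; the payload fields and helper name are hypothetical, not Gambit's actual implementation.

```python
# Sketch: inject a synthetic "gambit_init" call + result into an
# OpenAI-style transcript so trusted context enters as tool output
# (a lower-authority layer under the model spec's chain of command).
import json
import uuid

def with_gambit_init(messages, context):
    """Prepend a synthetic gambit_init tool call and its result."""
    call_id = f"call_{uuid.uuid4().hex[:12]}"
    synthetic = [
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": call_id,
                "type": "function",
                "function": {"name": "gambit_init", "arguments": "{}"},
            }],
        },
        {
            # The tool-result message carrying the trusted context.
            "role": "tool",
            "tool_call_id": call_id,
            "content": json.dumps(context),
        },
    ]
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + synthetic + rest

msgs = with_gambit_init(
    [{"role": "system", "content": "You are Gambit."},
     {"role": "user", "content": "hi"}],
    {"session": "demo", "caps": ["read"]},
)
```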

Also, relatedly, yes i have thought EXTREMELY deeply about cryptographic primitives to replace HTTP with peer-to-peer webs of trust as the primary units of compute and information.

Imagine being able to authenticate the source of an image using "private blockchains" à la Holepunch's Hypercore.


Injecting context via tool outputs to hit Layer 6 is a clever way to leverage the model spec.

The gap I keep coming back to is that even at Layer 6, enforcement is probabilistic. You are still negotiating with the model's weights. "Less likely to fail" is great for reliability, but hard to sell on a security questionnaire.

Tenuo operates at the execution boundary. It checks after the model decides and before the tool runs. Even if the model gets tricked (or just hallucinates), the action fails if the cryptographic warrant doesn't allow that specific action.
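To make the "check after the model decides, before the tool runs" idea concrete, here's a toy gate, not Tenuo's actual API: a warrant is a signed allow-list, and the executor verifies the signature and checks membership before dispatching. HMAC stands in for a real signature scheme; all names here are illustrative.

```python
# Toy execution-boundary check (illustrative, not Tenuo's real API):
# a signed allow-list "warrant" is verified between the model's tool
# choice and the actual tool execution.
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a real private key

def issue_warrant(allowed_actions):
    """Sign an allow-list of action names."""
    body = json.dumps({"allow": sorted(allowed_actions)}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def gate(warrant, action, run):
    """Run `run()` only if the warrant is authentic and permits `action`."""
    expected = hmac.new(SECRET, warrant["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, warrant["sig"]):
        raise PermissionError("warrant signature invalid")
    if action not in json.loads(warrant["body"])["allow"]:
        raise PermissionError(f"warrant does not allow {action!r}")
    return run()

w = issue_warrant(["files.read"])
gate(w, "files.read", lambda: "ok")          # permitted
# gate(w, "files.delete", lambda: "boom")    # raises PermissionError
```

The key property: even a fully prompt-injected model can only emit tool calls, and any call outside the signed allow-list fails deterministically at the gate rather than probabilistically at the weights.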

Re: Hypercore/P2P, I actually see that as the identity layer we're missing. You need a decentralized root of trust (Provenance) to verify who signed the Warrant (Authorization). Tenuo handles the latter, but it needs something like Hypercore for the former.

Would be curious to see how Gambit's Deck pattern could integrate with warrant-based authorization. Since you already have typed inputs/outputs, mapping those to signed capabilities seems like a natural fit.


yaaaaa exactly. You're totally on the same wavelength as me. Let's be friends lol


i have a huge theory here that idk when we’ll implement but it has to do with “quorums” and other stuff.

hard to explain… we’ll keep going.


right now yeah we’re just dropping context… sub agents are short lived.

thinking about ways to deal with that but we haven’t yet done it.

