To me, the current AI boom is more like when McDonald's became available in my neck of the woods after '89. Amazing at first, but then you realize it's mostly sloppy grease that has its uses.
The wild technology race of the 90s, on the other hand, felt like a magical new dimension opening up. Maybe just because it took much longer to get thoroughly turned into a vector for BS.
We have a "gambit_init" tool call that is synthetically injected into every request and carries the context. Because it arrives as the result of a tool call, it lands at layer 6 of the chain of command, so it's less likely to be subject to prompt injection.
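A rough sketch of what that injection could look like, assuming an OpenAI-style message list. The names here (`gambit_init`, `build_messages`, the synthetic call id) are illustrative, not Gambit's actual API:

```python
import json

def build_messages(system_prompt, user_prompt, context):
    """Inject context as a synthetic tool result so it sits at the
    tool-output trust level instead of the user-message level."""
    call_id = "call_gambit_init"  # fabricated id; never produced by the model
    return [
        {"role": "system", "content": system_prompt},
        # Fabricated assistant turn that "calls" the init tool.
        {"role": "assistant", "content": None, "tool_calls": [{
            "id": call_id,
            "type": "function",
            "function": {"name": "gambit_init", "arguments": "{}"},
        }]},
        # The injected context arrives as that tool call's result.
        {"role": "tool", "tool_call_id": call_id,
         "content": json.dumps(context)},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("You are a helpful agent.",
                      "Summarize the deck.",
                      {"workspace": "acme", "deck": "onboarding"})
```

The point is just that the context message carries `role: "tool"` rather than `role: "user"`, which is what bumps it up the chain of command.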
Also, relatedly, yes, I have thought EXTREMELY deeply about cryptographic primitives to replace HTTP with peer-to-peer webs of trust as the primary units of compute and information. Imagine being able to authenticate the source of an image using "private blockchains" à la Holepunch's Hypercore.
Injecting context via tool outputs to hit Layer 6 is a clever way to leverage the model spec.
The gap I keep coming back to is that even at Layer 6, enforcement is probabilistic. You are still negotiating with the model's weights. "Less likely to fail" is great for reliability, but hard to sell on a security questionnaire.
Tenuo operates at the execution boundary. It checks after the model decides and before the tool runs. Even if the model gets tricked (or just hallucinates), the action fails if the cryptographic warrant doesn't allow that specific action.
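To make the "check after the model decides, before the tool runs" idea concrete, here's a minimal sketch of an execution-boundary gate. The `Warrant` shape and `verify` function are hypothetical stand-ins, not Tenuo's real API, and actual signature verification is omitted:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Warrant:
    tool: str                       # the one tool this warrant authorizes
    allowed_args: dict = field(default_factory=dict)  # exact-match constraints

def verify(warrant: Warrant, tool_name: str, args: dict) -> bool:
    """Deny-by-default check run between model output and tool execution.
    Passes only if the warrant names this tool and every constrained
    argument matches exactly."""
    if warrant.tool != tool_name:
        return False
    return all(args.get(k) == v for k, v in warrant.allowed_args.items())

w = Warrant(tool="send_email", allowed_args={"to": "ops@example.com"})

# An action matching the warrant is allowed:
ok = verify(w, "send_email", {"to": "ops@example.com", "body": "hi"})
# A tricked or hallucinated action fails, regardless of *why* the model chose it:
bad = verify(w, "send_email", {"to": "attacker@example.com"})
```

The security argument is that this check doesn't care how the model was persuaded; the action either fits the warrant or it doesn't.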
Re: Hypercore/P2P, I actually see that as the identity layer we're missing. You need a decentralized root of trust (Provenance) to verify who signed the Warrant (Authorization). Tenuo handles the latter, but it needs something like Hypercore for the former.
Would be curious to see how Gambit's Deck pattern could integrate with warrant-based authorization. Since you already have typed inputs/outputs, mapping those to signed capabilities seems like a natural fit.
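One way that mapping could look, purely as a sketch: mint a signed capability from a tool's typed signature, then check proposed calls against it. HMAC stands in for a real signature scheme (e.g. Ed25519 over a decentralized identity), and all field names here are illustrative, not Gambit's or Tenuo's:

```python
import hmac, hashlib, json

def mint_capability(key: bytes, tool: str, input_schema: dict) -> dict:
    """Sign a (tool, schema) pair into a portable capability token."""
    payload = json.dumps({"tool": tool, "schema": input_schema},
                         sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "sig": sig}

def check_capability(key: bytes, cap: dict, tool: str, args: dict) -> bool:
    """Verify the signature, then require the call to name the authorized
    tool and use only arguments declared in its typed schema."""
    payload = cap["payload"].encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(cap["sig"], expected):
        return False  # tampered or forged capability
    spec = json.loads(cap["payload"])
    return spec["tool"] == tool and set(args) <= set(spec["schema"])

key = b"shared-secret"  # placeholder; a real system would use asymmetric keys
cap = mint_capability(key, "fetch_doc", {"doc_id": "str"})

allowed = check_capability(key, cap, "fetch_doc", {"doc_id": "42"})
denied = check_capability(key, cap, "delete_doc", {"doc_id": "42"})
```

Since the Deck already declares typed inputs/outputs, the schema half of the capability would come for free; only the signing and verification layer is new.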