
just put the markdown next to the code and in git; humans should read something like this before letting a "new developer decide to migrate the billing system database" (this sounds completely fabricated, by the way, and if real, you have problems most people don't)

why do I want all this extra stuff?

with markdown in the repo and agents available everywhere... what makes this approach better? (ps, the practice of coding has fundamentally changed forever, we are at the beginning of a paradigm shift, the 4th wave for any Toffler fans)


Thanks for the pushback — genuinely fair.

This is still just markdown in the repo. The Action doesn’t replace ADRs, it just surfaces the relevant ones automatically in PRs so reviewers don’t have to remember to look for them.

In teams where people consistently check ADRs, this probably isn’t useful.

In teams where the ADR exists but nobody remembers it during review, this helps reduce that friction.
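If it helps make it concrete, the matching step is tiny. A rough sketch of the idea (the docs/adr/ layout and the "Paths:" front-matter line here are simplified stand-ins for illustration, not exactly what the Action ships):

```python
# Sketch: map files changed in a PR to the ADRs that claim those paths.
# Assumes ADRs live in docs/adr/*.md with a "Paths:" line listing globs.
from fnmatch import fnmatch
from pathlib import Path

def relevant_adrs(changed_files: list[str]) -> list[Path]:
    hits = []
    for adr in Path("docs/adr").glob("*.md"):
        for line in adr.read_text().splitlines():
            if line.startswith("Paths:"):
                globs = line.removeprefix("Paths:").split()
                if any(fnmatch(f, g) for f in changed_files for g in globs):
                    hits.append(adr)
                break  # only the first Paths: line matters
    return hits

# The Action would then post the hits as a PR comment, e.g.:
# for adr in relevant_adrs(["billing/db/schema.sql"]): print(adr)
```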

And yeah — the Mongo example was dramatized. The real version was just re-explaining a past decision in a design doc. Not catastrophic, just wasted cycles.

Appreciate the sanity check.


it's crazy that Google is spending something like 4x this in a year just for capex

wonder how much of that $30B will make its way to them and pay that down


has me wondering if Anthropic is one of those confidential TPU buyers

likely because Google has a stake in Anthropic

WaPo should point something like this at themselves before they point it at anyone else, with an uno reverse card

While I don't find your deflection and off-topic whataboutism useful, it does remind me of the time West Virginia was thinking of replacing its motto "Almost Heaven."

WaPo held a contest to come up with a new slogan and the winner was "Almost Haiti."


I've heard it posited that the reason the frontier companies are frontier is that they have custom data and evals. This is what I would do too

It's a giant game of leapfrog; shift or stretch the timeline a bit and they all look equivalent

Here's a good thread that has run over a month-plus, updated as each model comes out:

https://bsky.app/profile/pekka.bsky.social/post/3meokmizvt22...

tl;dr - Pekka says ARC-AGI-2 is now toast as a benchmark


If you look at the problem space, it's easy to see why it's toast. Maybe there's intelligence in there, but hardly general.

the best way I've seen this described is "spikey" intelligence: really good at some points, and those make the spikes

humans are the same way, we all have a unique spike pattern, interests and talents

AIs effectively have the same spikes across instances, if simplified. I could argue self-driving vs chatbots vs world models vs game playing might constitute enough variation. I would not say the same of Gemini vs Claude vs ... (instances); that's where I see "spikey clones"


You can get more spiky with AIs, whereas the human brain is more hard-wired.

So maybe we are forced to be more balanced and general, whereas AIs don't have to be.


I suspect the non-spikey part is the more interesting comparison

Why is it so easy for me to open the car door, get in, close the door, buckle up? You can do all of this in the dark and without looking.

There are an infinite number of little things like this that you think zero about and that take near-zero energy, yet which are extremely hard for AI


>Why is it so easy for me to open the car door

Because this part of your brain has been optimized for hundreds of millions of years. It's been around a long ass time and takes an amazingly low amount of energy to do these things.

On the other hand, the 'thinking' part of your brain, that is, your higher intelligence, is very new to evolution. It's expensive to run. It's problematic when giving birth. It's really slow with things like numbers; heck, a tiny calculator can whip your butt at adding.

There's a term for this (Moravec's paradox, I believe), but the name escapes most people at the moment they need it.


You are asking a robotics question, not an AI question. Robotics is more and less than AI. Boston Dynamics robots are getting quite near your benchmark.

Boston Dynamics is missing just about all the degrees of freedom involved in the scenario the OP mentions.

> maybe there's intelligence in there, but hardly general.

Of course. Just as our human intelligence isn't general.


@dang will often replace the post URL & merge comments

HN guidelines prefer the original source over social posts linking to it.


why can't they use the same one I do, and already pay for?

I'm certainly not going to pay for two versions of the same thing


Totally fair question! SaySigned isn't meant to replace your existing e-signature tool for human signing workflows. It's built for a different use case entirely: AI agents signing contracts with other AI agents, with no human in the loop.

The primary interface is actually MCP (Model Context Protocol) — so AI agents like Claude, GPT, etc. can natively discover and use signing capabilities as tools.
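If you haven't seen MCP server code before, here's a minimal sketch of what exposing a signing capability looks like with the official MCP Python SDK. The tool name and parameters are simplified for illustration, not our exact API:

```python
# Minimal MCP server exposing a signing capability as a tool.
# Tool name and parameters are illustrative, not the real SaySigned API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("saysigned-demo")

@mcp.tool()
def sign_document(document_id: str, agent_id: str) -> str:
    """Sign the given document on behalf of a registered agent."""
    # A real implementation would verify the agent's authorization
    # scope and return a signed artifact; this just echoes a receipt.
    return f"signed:{document_id}:by:{agent_id}"

if __name__ == "__main__":
    mcp.run()  # agents connect and discover sign_document as a tool
```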

Think of it less like "DocuSign but again" and more like "what happens when your AI procurement agent needs to execute an NDA with a vendor's AI sales agent at 2am." No browser, no email-a-link-to-sign. The audit trail and legal attribution are designed around machine-to-machine workflows.

If you're signing things yourself, your current tool is the right one. SaySigned is for when your AI agents are doing the signing.


1. It's not clear this is even legal in the first place

2. Who's reviewing the contract at 2am to make sure it's not going to cause the company harm?

3. When will existing providers like DocuSign or Google Workspace have both options for me? Generally, why two tools that do one thing instead of one tool that does both things?

4. Why not use computer-use in the meantime?


Great questions — these are exactly the right things to push on.

1. It's actually clearer than you'd think. UETA Section 14 (from 1999, adopted in 49 states) explicitly says: a contract can be formed by the interaction of electronic agents even if no individual was aware of or reviewed the actions. The ESIGN Act says the same — electronic signatures are valid as long as they're "legally attributable to the person to be bound." The legal principle isn't new: the human or org who authorizes the agent is liable, just like traditional agency law. We maintain a full attribution chain — principal → agent registration → scoped authorization → action → cryptographic audit trail — so there's always an unbroken link back to who's responsible.

2. This is a product design question, not a platform limitation. SaySigned doesn't remove human oversight — it gives you the infrastructure to define exactly how much oversight you want. Humans create the contract templates upfront — the exact terms, clauses, and conditions they want. The agent doesn't improvise contracts, it executes from pre-approved templates the human defined. On top of that, you can scope an agent's authority to only sign contracts under $X, only with pre-approved counterparties, only for specific document types. The 2am scenario isn't "no one is watching" — it's "the human wrote the contract template, set the guardrails at 2pm, and the agent operates within them." Same way you'd give a procurement team a spending policy and approved vendor list rather than approving every PO yourself.
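To make the scoping concrete, a simplified sketch of the guardrail check (the field names here are illustrative, not our actual policy schema):

```python
# Illustrative guardrail check before an agent may sign anything.
# Field names (max_amount, counterparties, doc_types) are hypothetical.
from dataclasses import dataclass

@dataclass
class AgentScope:
    max_amount: float
    counterparties: set[str]
    doc_types: set[str]

def may_sign(scope: AgentScope, amount: float, party: str, doc_type: str) -> bool:
    return (
        amount <= scope.max_amount
        and party in scope.counterparties
        and doc_type in scope.doc_types
    )

scope = AgentScope(10_000, {"acme-corp"}, {"nda"})
assert may_sign(scope, 5_000, "acme-corp", "nda")
assert not may_sign(scope, 50_000, "acme-corp", "nda")  # over budget
```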

3. DocuSign and Google Workspace are built around a human-in-the-loop UX: email a link, open in browser, click to sign. Their entire architecture assumes a human at a screen. MCP (Model Context Protocol) is a fundamentally different interface — agents discover signing capabilities as native tools, no browser automation needed. And here's the thing about Google Workspace specifically: if you're using Gemini inside Workspace, Gemini could connect to SaySigned's MCP server and use it as a signing tool directly. We're not competing with Google — we're the signing layer their AI agent would use. Could DocuSign build this? Sure, eventually. But retrofitting agent-native workflows onto a human-first architecture is a very different challenge than building for it from day one. Same reason Figma didn't come from Adobe.

4. Computer-use (browser automation) is the worst of both worlds for this. It's slow, brittle (UI changes break it), expensive per transaction, and gives you zero cryptographic proof of what happened. If your agent clicks through DocuSign via browser automation, you get a screenshot-level audit trail at best. With MCP, you get a PAdES-signed PDF with an embedded RFC 3161 timestamp, a cryptographic hash chain in the audit log, and a green checkmark in Adobe Reader. One is a hack, the other is infrastructure.
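The hash chain itself is a standard construction. A toy sketch of the idea (the production log format, and the RFC 3161 timestamping step, are more involved):

```python
# Toy hash-chained audit log: each entry commits to the previous hash,
# so tampering with any entry breaks every hash that follows it.
import hashlib, json

def append_entry(log: list[dict], event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    log.append({"prev": prev, "event": event,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

log: list[dict] = []
append_entry(log, {"action": "sign", "doc": "nda-42"})
append_entry(log, {"action": "countersign", "doc": "nda-42"})
# An RFC 3161 timestamp authority would then sign the head hash,
# anchoring the entire chain to a verifiable point in time.
```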


I use ADK and don't have this problem; I can switch sessions in my coding agent and they keep running in the background. If a new event comes in and I have that session open, then the UI updates

This isn't really a problem, just bad design on the app's part.

tl;dr - don't intertwine your agent engine with UI rendering, easy enough?
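For anyone wondering what that separation looks like, a generic sketch (not ADK's actual API, just the shape of it):

```python
# Generic sketch: the agent engine appends events to a per-session
# store; the UI just renders whichever session it currently has open.
import queue
from collections import defaultdict

sessions: dict[str, queue.Queue] = defaultdict(queue.Queue)

def agent_emit(session_id: str, event: str) -> None:
    sessions[session_id].put(event)  # engine never touches the UI

def ui_drain(session_id: str) -> list[str]:
    q, events = sessions[session_id], []
    while not q.empty():
        events.append(q.get())
    return events

agent_emit("s1", "tool_call: grep")  # runs in the background
agent_emit("s2", "model_response")   # another session, still running
print(ui_drain("s1"))  # UI updates only the session you have open
```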


Interactive is hard

1. People prefer reading and watching

2. Getting interactions to work well across platforms is going to have your agents endlessly spinning their wheels

Cold emails are not marketing; that is more often considered sales, or even more likely spam. Waitlists are largely meaningless. Don't try to talk to investors until you have users actually using the platform.

Your best strategy is to work on a problem you have in a domain you already deeply understand. If you're picking one because it looks like you can make money, you will fail.

Focus on problems, not solutions; solutions need to remain flexible.

