Hacker News | ipsin's comments

I wonder about the world where, instead of investing in AI, everyone invested in API.

Like, surfacing APIs, fostering interoperability... I don't want an AI agent, but I might be interested in an agent operating with fixed rules, and with a limited set of capabilities.

Instead we're trying to train systems to move a mouse in a browser and praying it doesn't accidentally send 60 pairs of shoes to a random address in Topeka.


LLMs offer the single biggest advance in interoperability I've ever seen.

We don't need to figure out the one true perfect design for standardized APIs for a given domain any more.

Instead, we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.
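As a sketch of what that glue code tends to look like (the endpoint, field names, and response shape here are all invented for illustration), an LLM working from one documented example request will typically emit a thin wrapper like this:

```python
# Hypothetical glue code of the sort an LLM might generate from a single
# documented example. The endpoint and field names are invented.
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"

def build_orders_request(api_key, status="shipped"):
    """Build the authenticated request for GET /v1/orders?status=..."""
    return urllib.request.Request(
        f"{BASE_URL}/orders?status={status}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def parse_orders(payload):
    """Pull out just the fields we need from the documented response shape."""
    return [(o["id"], o["total_cents"]) for o in json.loads(payload)["orders"]]
```

The point isn't that this code is clever; it's that it's tedious, and an LLM can draft it from a docs page in seconds for a reviewer to check.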


The problem with LLMs as interoperability is that they only work sub-100% of the time. Yes, they help, but the point of the article is: what if we spent $100 billion on APIs? We absolutely could build something far more interoperable, and 100% accurate.

I think about code generation in this space a lot because I’ve been writing Gleam. The LSP code actions are incredible. There’s no “oh sorry, I meant to do it the other way” like you get with LLMs, because everything is strongly typed. What if we spent $100 billion on a programming language?

We’ve now spent many hundreds of billions on tools which are powerful but we’ve also chosen to ignore many other ways to spend that money.


If you gave me $100 billion to spend on API interoperability, knowing what I know today, I would spend that money inventing LLMs.


For $100 billion you could get public standards for APIs of all kinds implemented. I don’t think people understand just how much money that is. We’re talking solve-extreme-hunger-and-create-international-API-standards-afterwards money.


Having lurked around the edges of various standardization processes for 20+ years, I don't think this is a problem that gets fixed by money.

You can spend an enormous amount of money building out a standard like SOAP, which might then turn out not to have nearly as much long-running staying power as the specification authors expected.


That’s totally fair. Money though is a representation of desire and the reality is people don’t have interest in solving these problems. We live in a society where there’s much more interest in creating something that might be god than solving other problems. And that’s really the main point of the article.

But also, even if the W3C spent $10m a year for the 10 years SOAP was being actively developed (according to Wikipedia), that would still be 1/1000 of the $100 billion we’re talking about. So we really have no idea what this sort of money could do if mobilized in other ways.


Yeah. Much of that money is going to physically building data centers, in the middle of an affordable housing crisis. "Look, I just need a few billion, to build a server farm, to build the machine god, who will tell us how to solve the homelessness and housing insecurity." If it works? That'd be neat. Right now, it sounds like crackhead logic.


Today I compiled a few thousand classes of Javadocs in 0.978 seconds. I was impressed: coming from a build of over 2 minutes, where it feels like each byte of code we write takes a second to execute, it’s a reminder that computing is actually lightning fast, just not when it’s awfully written.

Time to execute bytecode << a REST API call << launching a full JVM for each file you want to compile << launching an LLM to call an API (each << is at least ×10).


The point is that you call the LLM to generate the code that lets you talk to the API, rather than writing that glue code yourself. Not that you call the LLM to talk to that API every time.


Exactly.


> LLMs offer the single biggest advance in interoperability I've ever seen.

> ... we need to build APIs with just enough documentation (and/or one or two illustrative examples) that an LLM can help spit out the glue code needed to hook them together.

If a developer relies on client code generated by an LLM to use an API, how would they know if what was generated is a proper use of said API? Also, what about when lesser used API functionality should be used instead of more often used ones for a given use-case?

If the answer is "unit/integration tests certify the production code", then how would those be made if the developer is reliant upon LLM for code generation? By having an LLM generate the test suite?

And if the answer is "developers need to write tests themselves to verify the LLM generated code", then that implies the developer understands what correct and incorrect API usage is beforehand.

Which begs the question: why bother using an LLM to "spit out the glue code", other than as a way to save some keystrokes which have to be understood anyway?


The developer has the LLM write the test suite, then the developer reviews those tests.

This pattern works really well.

"other than as a way to save some keystrokes which have to be understood anyway?"

It's exactly that. You can save so many keystrokes this way.
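To make the review-the-tests pattern concrete (the function under test and its names are invented here, not from any real codebase), the sort of LLM-drafted test a developer would then read over looks like:

```python
# Hypothetical LLM-drafted tests that a developer reviews rather than types.
# `parse_orders` stands in for a piece of generated glue code that extracts
# (id, total_cents) pairs from a JSON response body.
import json

def parse_orders(payload):
    return [(o["id"], o["total_cents"]) for o in json.loads(payload)["orders"]]

def test_parse_orders_happy_path():
    body = json.dumps({"orders": [{"id": "a1", "total_cents": 999}]})
    assert parse_orders(body) == [("a1", 999)]

def test_parse_orders_empty():
    # An empty order list should yield an empty result, not an error.
    assert parse_orders('{"orders": []}') == []
```

Reviewing two short assertions like these is much faster than writing them, which is where the keystroke savings come from.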


a test suite is a way to stochastically reduce certain classes of risk

it doesn't assert, or imply, correctness in the sense that is used here


As if the challenges in writing software are how to hook APIs together.

I get that in the webdev space, that is true to a much larger degree than has been true in the past. But it's still not really the central problem there, and is almost peripheral when it comes to desktop/native/embedded.


Agree. I often prefer to screen scrape even when an API is available because the API might contain limited data or other restrictions (e.g. authentication) that web pages do not. If you don't depend on an API, you'll never be reliant on an API.


Basically the opposite has happened. Nearly every API has been removed or restricted, and every company is investing a lot of resources in making their platforms impossible to automate, even with browser automation tools.

Mix of open platforms facing immense abuse from bad actors, and companies realising their platform has more value closed. Reddit for example doesn't want you scraping their site to train AIs when they could sell you that data. And they certainly don't want bots spamming up the platform when they could sell you ad space.


We work with American health insurance companies and their portals are the only API you’re going to get. They have negative incentive to build a true API.

LLMs are 10x better than the existing state of the art (scraping with hardcoded selectors). LLMs making voice calls are at least that much of an improvement over the existing state of the art (humans sitting on hold).
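For anyone who hasn't lived it, here is roughly what "hardcoded selectors" means (the portal markup below is invented): the extraction logic is pinned to one exact markup shape, and any redesign of the page silently breaks it.

```python
# Hypothetical sketch of selector-style scraping against invented portal HTML.
# The pattern is pinned to the exact markup; any redesign breaks it.
import re

HTML = '<div class="claim-status"><span id="status">Approved</span></div>'

def scrape_status(html):
    """Hardcoded to one specific markup shape; returns None on any change."""
    m = re.search(r'<span id="status">([^<]*)</span>', html)
    return m.group(1) if m else None
```

When the portal ships a redesign, `scrape_status` just starts returning `None`, which is the fragility the LLM-based approach is being compared against.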

The beauty of LLMs is that they can (can! not perfectly!) turn something without an API into one.

I’m 100% with you that an API would be better. But they’re not going to make one.


I feel like it’s not technically difficult to achieve this outcome… but the incentives just aren’t there to make this interoperable dream a reality.

Like, we already had a perfectly reasonable decentralized protocol with the internet itself. But ultimately businesses with a profit motive made it such that the internet became a handful of giant silos, none of which play nice with each other.


The appeal of AI to investors is precisely that it's anti-API, anti-access.


I was hoping to see the questions (which I can probably find online), but also the answers from models and the judge's scores! Am I missing a link? Without that I can't tell whether I should be impressed or not.


https://matharena.ai/

On their website you can see the full answers the LLMs gave ("click cells to see...").


Thanks, I was annoyed that the article didn't cite the actual law in question, but the BBC comes in with "Port of London Thames Byelaws, clause 36.2"

https://www.bbc.com/news/articles/cmlrx89jdv2o


The BBC also didn't call it "ancient," which would be questionable considering that the law is from 2012.


It's an ancient practice, codified into law in 2012 when the regulatory framework was re-codified from multiple laws like the Port of London Act 1908, as well as time-immemorial customs like this.


According to the article the original practice is medieval, not ancient. The colloquial usage of "ancient", as in "my car is ancient", is a bit odd here.


Fun fact: in English law "time immemorial" has a very specific meaning: it means "any time before 1189". See https://en.wikipedia.org/wiki/Time_immemorial for more.


Can we agree that the language of sexual violence is inappropriate in this context?


Prompt: Share your prompt that stumps every AI model here.




I appreciate that you can make your nails fancy on a budget (e.g. using nail stamping).

I enjoy having trimmed nails as well, because having very long nails can make certain tasks difficult, so all these designs make me cringe a bit.

But... if you ever feel curious, explore DIY nails!


Looking at the replies to, say, https://x.com/aoc, it seems pretty steered to me.

If you want to hear replies from her supporters, good luck getting through a few hundred blue check marks. Compare this to something like https://x.com/RonDeSantis, which is full of adulation for the guy.

In my estimation, X is tilted pretty far right at this point, simply because paid blue check marks are a sign of pride or shame, based on your political affiliation.


That's what I find most offensive about the use of LLMs in education: it can readily produce something in the shape of a logical argument, without actually being correct.

I'm worried that a generation might learn that that's good enough.


A generation of consultants is already doing that: look at the ruckus around PwC etc. in Australia. Hell, look at the folks supposedly doing diligence on Enron. This is not new. People lie, fib and prevaricate. The fact that machines trained on our actions do the same thing should not come as a shock. If anything it strikes me as the uncanny valley of truthiness.


To me, this is the same as the "loyalty cards" that provide "discounts" in U.S. grocery chains like Kroger's & Ralph's. They've already decided to take your money with higher prices, and dole out small discounts for people who want to play their games.

As long as I have a choice, I will avoid companies that play such games.


According to the story of JC Penney, people like their discounts. When the company tried a "fair and square" pricing strategy, it was a huge failure and they got back to the usual way.


Well, the customers didn't like it, but the business continued to fail after changing back. Stock price declined, stores were closed, and eventually the company declared bankruptcy.

It may have been one of those "customers wanted the faster horse" situations where the business tried to build a faster horse.

