
"The rise of MCP gives hope that the popularity of AI amongst coders might pry open all these other platforms to make them programmable for any purpose, not just so that LLMs can control them."

I think the opposite, MCP is destined to fail for the exact same reason the semantic web failed, nobody makes money when things aren't locked down.

It makes me wonder how much of the functionality of things like AI searching the web for us (sorry, doing "deep research") might have been solved in better ways. We could have had restaurants publish their menus in a metadata format, and anyone could write a Python script to, say, find the cheapest tacos in Texas. But no: the left hand locks down data behind artificial barriers, and then the right hand builds AI (datacenters and all) to get around it. On a macro level it's just plain stupid.
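To make the thought concrete, here's a rough sketch of the kind of script that becomes possible once menus are machine readable. It assumes each restaurant exposes schema.org Menu/MenuSection/MenuItem/Offer data as JSON-LD at some known URL; the URLs and exact field layout here are illustrative, not a real API.

    # Hypothetical sketch: find the cheapest taco across restaurants that
    # publish schema.org Menu metadata as JSON-LD. The URLs below are made up.
    import json
    import urllib.request

    MENU_URLS = [
        "https://example-taqueria-austin.test/menu.jsonld",
        "https://example-taqueria-houston.test/menu.jsonld",
    ]

    def taco_offers(menu: dict):
        """Yield (item name, price) pairs for menu items that look like tacos."""
        for section in menu.get("hasMenuSection", []):
            for item in section.get("hasMenuItem", []):
                name = item.get("name", "")
                offer = item.get("offers", {}) or {}
                if "taco" in name.lower() and "price" in offer:
                    yield name, float(offer["price"])

    cheapest = None  # (url, name, price)
    for url in MENU_URLS:
        with urllib.request.urlopen(url) as resp:
            menu = json.load(resp)
        for name, price in taco_offers(menu):
            if cheapest is None or price < cheapest[2]:
                cheapest = (url, name, price)

    if cheapest:
        print(f"Cheapest taco: {cheapest[1]} for ${cheapest[2]:.2f} ({cheapest[0]})")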



> I think the opposite, MCP is destined to fail for the exact same reason the semantic web failed, nobody makes money when things aren't locked down.

I think this is right. MCP resembles robots.txt evolved into some higher lifeform, but it's still very much "describe your resources for us to exploit them".

The reason the previous agent wave died (it was a Java thing in the 90s) was that eventually everyone realized they couldn't trust their code once it was running on a machine it was supposed to be negotiating with. Fundamentally there is an information asymmetry problem between interacting agents, entirely by design. Take that away and huge swathes of society will stop functioning.


"describe your resources for us to exploit them"

What you want to do is offer resources that make you money when they're "exploited".


I would agree with that if there were no distinction between clients and servers, i.e. if agents and LLMs were resources to be discovered and exploited in exactly the same way as anything else, and switchable in the same ways.

The whole thing reminds me of stuff like Alan Kay's legendary OOPSLA talk in 1997 ( https://www.youtube.com/watch?v=oKg1hTOQXoY ) "Every object should have an IP" (Also "Arrogance in computer science is measured in nano Dijkstras").


I think the problem is in business processes: they aren't designed to be automated. People need to stay in control, or there need to be deliberate blockers, because if everyone could cancel their subscription in one step with a simple prompt, that would be a huge revenue loss.

Look at how every company has a super system for CRM/sales, but when you go to the back office everything runs in spreadsheets and sometimes on actual paper.


It's not just that nobody makes money providing a free and open API. It's that to operate such an API you'll basically need unlimited resources. No matter how many resources you throw at the problem, somebody will still figure out a way of exhausting those resources for marginal gains. MCP will just make the problem worse as AI agents descend on any open MCP servers like locusts.

The only stable option, I think, is going to be pay-per-call RPC pricing. It's at least more viable than it was for Web 2.0 APIs, since at least the entity operating the model / agent will act as a clearinghouse for all the payments. (And I guess their most likely billing model is to fold these costs into their subscription plans? That seems like the best way to align incentives.)


This is correct imo.

...and just like no one is prepared to pay 500 different vendors for micro-transactions, no one is prepared to pay 500 different websites for their MCP services.

Much much more likely is that a few big players (like Google and AWS) will have paid-tier 'mega-MCP' servers that offer 90% of what people need and fit in with existing payment/auth solutions (like your AWS account), and...

...everyone else with an MCP server will be left out in the cold.

That's what's going to happen.

These MCP servers will be cute for a very very short amount of time until people try to monetize them, and then they will consolidate very very quickly into a much smaller set of servers that people are already paying for. Mostly cloud providers.


HATEOAS was the dream in the early 2010s, and it basically went nowhere beyond generating Swagger YAML, despite the fact that it was intended to make API consumption trivial.

Whoever coined it as HATEOAS basically set it up to fail though.


> Whoever coined it as HATEOAS basically set it up to fail though.

I could never understand making the term "hate" so prominent.


> HATEOAS was the dream in the early 2010s and that basically went nowhere

I dunno, HTTP/1.1, the motivating use case for REST and HATEOAS, seems to have been moderately successful.


MCP is just that again, but less well thought out. Everything new is old.


I think MCP’s popularity is a side effect of the hype bubble driving AI atm - one of the fancy things one can do with AI.

If there was any “easy” value in making one’s data available in a standard form, we would’ve seen a lot more adoption of interoperable endpoints (e.g. using schema.org or generally common ontologies as opposed to custom formats that always need a special magic SDK).


There is an easy way to make your data available. It's existed for several hundred years; it's called plain text. We now have tools that allow computers to work with plain text. Outside of specific niches, ontologies are vanity projects.


Plain human-readable text is not an "artificial barrier". It's the nature of our world. Requiring that a restaurant publish menus in a metadata format is an artificial barrier. That's the beauty of these new NLP tools. I don't need to have a restaurant owner learn JSON, or buy a software package that generates JSON. We can use data as it is. The cost of building useful tools goes to near zero. It will be imprecise, but that's what human language is.


Plain text menus would have been fine


How do you do things like compare prices in plain text?


You look at one price, then look at a different one, and then you make your comparison.


> It's the nature of our world.

It's the nature of capitalism.

Some forms of capitalism may have roots in the natural world - natural selection as both a destructive & wasteful competitive process certainly has a lot of parallels in idealised markets - but there's nothing inherent about your menu example when it comes to the modern human world, beyond restrictions placed upon us by capitalism.

> Requiring that a restaurant publish menus in a metadata format is an artificial barrier

This is oddly phrased, as no one would need to require anyone to do anything - it's obviously beneficial to a restaurant to publish their menus in formats that are as broadly usable as possible. The only barrier to them doing that is access to tools.

The various hurdles you're describing ("buying" software, the "cost" of building tools) are not natural phenomena.


MCP is described as a means to make the web open, but actually it's a means to make demos of neat things you could do if the web were actually open.


I still don't know who uses this semantic web. Like, you have all these semantics marked up... for whom? What actual applications are using this?

Google supports a small subset of schema.org, but rather than "semantic web" it feels more like "here's my API." Its own schema tester often complains about things that should be valid schemas, simply because they don't conform to its API. How would any web developer mark up (and test the validity of said markup) for applications that don't even exist?


xAI is a concrete example of this. During the initial LLM explosion, X locked down its previously public APIs and data sources. Simultaneously, xAI is investing massively in building its private data hoard and compute infrastructure. Probably a similar case with Meta.

"Data for me but not for thee"

MCP is arguably only seeing the light of day because of LLM "one trick ponies" like OpenAI and Anthropic, who do benefit from MCP amplifying their value proposition. As that business model continues to fizzle out and lose to (or become subordinate to) the AI integrators (Google, Microsoft, xAI?), MCP will probably fizzle out as well.


> "Data for me but not for thee"

Exactly. Consumer tech has been locking down APIs and actively working against interop since the advertising business model became dominant. Web 2.0 is the most obvious example, but there are plenty more.

Look, you don't even own your own contacts on social media sites. If you access Google from an atypical browser you will get fingerprinted, CAPTCHA'd, and rejected. Anti-scraping, anti-fraud, paywalls, even lawsuits because you're using "our" APIs (see Oracle).

It’s not the tech, it’s the business model. It’s adversarial, it’s a Mexican standoff. Doesn’t matter how good the MCP spec is, it’s not gonna go anywhere with consumer apps. As long as the economy is based on ads, your B2C company needs a sidebar to show ads (or in rare cases, subscriptions). You’re not gonna make money from providing a free API.


Sure - not many companies made money on "HTTP", but lots of people/companies made gobs of money by adopting it.


I haven't paid close attention. Why can't people make money with MCP-based APIs? Why can't providers require API keys / payment to call their functions?


Sure they can - they're just another API interface tailored for LLMs. I think parent and OP are in fact ranting about that (many APIs being locked behind signups or paywalls). Not sure I agree with the criticism though. In my view, web 2.0 was a huge success: we went from a world with almost no APIs to one where nearly every major website or app offers one. That's real progress, even if we didn't turn every business into an open data non-profit.


MCP is basically APIs V2 as far as I can see. It probably will evolve in its concrete specs, but it's useful and not niche, especially when servers can be composed fairly trivially.

In that sense, it is probably the building block for the next user interface, which is conversational.

Maybe the mention of Web 2.0 is triggering all the negative responses here, but on its own it is useful and could disrupt (not MCP itself, but the overall field) how we interact with software and systems.


I'm cognizant of the silliness of mentioning this in comments lambasting the semantic web, but there's nothing stopping any restaurateur from doing that today, and the restaurant is mildly incentivized to do so, because it allows folks who want to pay for tacos to be matched with those who have tacos to sell.

I think a lot of WordPress plugins support schema.org too, so the site may actually already be publishing the metadata, so long as it's not a damn PDF.

- https://schema.org/Restaurant

- https://schema.org/Menu

- https://schema.org/MenuItem

- https://schema.org/Offer

The lockdown part is almost certainly Cloudflare gatekeeping, which would straight up kill your Python idea.
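For concreteness, here's roughly what markup using those types could look like for a single menu, sketched as the Python dict a site generator might serialize into the page; the restaurant, items, and prices are invented.

    # Hedged sketch of schema.org Menu/MenuSection/MenuItem/Offer markup.
    import json

    menu_jsonld = {
        "@context": "https://schema.org",
        "@type": "Menu",
        "name": "Dinner menu",
        "hasMenuSection": [{
            "@type": "MenuSection",
            "name": "Tacos",
            "hasMenuItem": [{
                "@type": "MenuItem",
                "name": "Carnitas taco",
                "offers": {"@type": "Offer", "price": "3.50", "priceCurrency": "USD"},
            }],
        }],
    }

    # A WordPress plugin (or any site generator) would embed this JSON in the
    # page, where crawlers and scripts can pick it up.
    print(json.dumps(menu_jsonld, indent=2))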


The reason the semantic web failed is not only because "nobody makes money when things aren't locked down". It's also because nobody ain't got no time for generating infinite amounts of metadata when full-text search and indexing, with a judicious pinch of fuzzy matching, is both faster and more reliable. And LLMs, as much as I dislike the technological/societal consequences of their existence, are effectively a further development of the latter, so they won't go away.

Manual or even semi-automated cataloguing (and further curating) of websites wasn't the answer to "how do I find stuff on the web" — Google was. Having a standardized metadata format for menus is undoubtedly nice — but good luck making people use it. You just can't. It really is both cheaper and easier for everyone involved to have a website with an arbitrary information layout scraped and fed into an LLM to extract the relevant data: because what is "relevant" is different for everyone. You can't pre-ordain the full list of possibly relevant metadata, and, again, good luck forcing everyone to fill out those 500-item-long forms. Ain't nobody got time for that.


I tend to agree that one of the top semantic web problems was:

> It's also because nobody ain't got no time for generating infinite amount of metadata

There are also a lot of tooling problems: the semantic web doesn't integrate gracefully with the POJOs of the programming world.

The tooling distance between users/devs and the semantic web remains. But all that metadata? An interesting, rich world of information, associated & well described & meticulous? We actually seem to have just invented a really powerful tool for doing all that immense cataloguing (LLMs).


>nobody makes money when things aren't locked down

I would rephrase that as "incumbents don't usually make more money if things are opened up".

If consumers get materially better value, then a challenger ecosystem around MCP will evolve. It will be open at first - great for startups and challengers, an innovator's dilemma for market leaders.

And then it will close as the new wave establishes its moats. But, similar to the web, even though the current web leaders are now much more closed than we would like, the overall ecosystem is more open than it was.


> I think the opposite, MCP is destined to fail for the exact same reason the semantic web failed, nobody makes money when things aren't locked down.

Is there a way to handle "locking down" things with MCP? It seems like a potential opportunity for freemium services if they have a mechanism for authentication and telling the MCP caller "this user has access to these tools, but not _these_ yet".


Yes. MCP allows (and uses) exactly the same authentication mechanisms that any other REST or similar API allows. So if you have a service you want to expose (or not) via MCP, you can do that in exactly the same way as you currently could for a REST API.

The difference for the user is that instead of having to make (or use) a special-purpose client to call your REST API, the LLM (or LLM-powered application) can just call the API for them, meaning your REST service can be integrated into other LLM-powered workflows.


Does it though? How do you provide a login or bearer token or whatever for the LLM to use?


The LLM (or LLM-powered application) is just running the MCP server from the file system, so you can use your normal methods: inject the credential into the environment, pass it somewhere as a parameter, or whatever.
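A minimal sketch of that, assuming the official MCP Python SDK's FastMCP interface; the environment variable names, the upstream endpoint, and the premium/free split are all invented for illustration. The host application launches the server as a subprocess and sets the variables, so the secret never flows through the model.

    # Hypothetical MCP server that gets its credentials from the environment.
    import os
    import urllib.request

    from mcp.server.fastmcp import FastMCP  # MCP Python SDK (assumed installed)

    mcp = FastMCP("widget-prices")

    # Injected by the host application's server config, never typed into the
    # chat or seen by the model.
    API_KEY = os.environ["WIDGET_API_KEY"]
    PLAN = os.environ.get("WIDGET_PLAN", "free")

    def _fetch(path: str) -> str:
        """Call the (hypothetical) paid upstream API with the injected key."""
        req = urllib.request.Request(
            f"https://api.example.test{path}",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()

    @mcp.tool()
    def get_price(sku: str) -> str:
        """Look up the list price for a SKU (available on every plan)."""
        return _fetch(f"/prices/{sku}")

    if PLAN == "premium":
        # Freemium gating: paying users get an extra tool, everyone else doesn't.
        @mcp.tool()
        def get_price_history(sku: str) -> str:
            """Return historical prices for a SKU (premium plan only)."""
            return _fetch(f"/prices/{sku}/history")

    if __name__ == "__main__":
        mcp.run()  # serves over stdio, which is how local hosts typically launch it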


> On a macro level its just plain stupid.

You've described most white-collar jobs :)


It's reasonable to be cynical, but the future hasn't been written yet. If we choose only to see a negative future, we will ensure that it can only exist.

In the negative vein, I see a lot of VCs and business leaders talking about making AI for companies that directly interface with customers.

Those agents will be used to manipulate and to make existing services painful, exactly like today: enshittified transactional websites engineered for maximum pain.

A different direction can happen if we choose instead to use our ai agents to interact with business services. This is actually what's currently happening.

I use gemini/chatgpt to find things on the web for me without being manipulated and marketed at. Maybe one day soon, I can buy airline tickets without the minefield of dark patterns employed to maximize A/B tested revenue.

The only thing that needs to happen to keep us on this path is to bite the heels of the major companies with great agent systems that put the user at the center and not a brand. That means selling AI agents as a SaaS or open source projects - not ad-supported models.

This community is probably the group that has, collectively, the most say in where this future goes. Let's choose optimism.


The thing is, if AI agents become a significant part of web traffic then the content of the web will simply shift to manipulate the agent instead of the human.

And don't forget that when you use an AI agent today to buy something, it's using "marketing" information to make its decisions. It's influenced by SEO in its search results; indeed, there's no shortage of marketers busy working out how to do this.

I do agree there’s much to be optimistic about but the fundamental dynamics of the consumer market won’t change.


It's absolutely true that in that future vision, the agents will then be marketed at.

And that's great.

In that world, those agents will sift through the noise. And the one that does that the best will win.

The end user experience then becomes uniform and pleasant.

It's the difference between talking to a personal secretary and a customer service representative. No one should have to endure the latter.


> In that world, those agents will sift through the noise. And the one that does that the best will win.

The existence of agents capable of learning to cut through the enshittification also implies the existence of agents capable of learning to enshittify all the more effectively. It's an arms race, and there's no reason to suspect that the pro-consumer side will win over the pro-exploitation side.


Oh SGML... what could have been



