They are not. It's the eastern imperialists that are causing this. And if it's a choice between eastern imperialists (China and Russia) and western ones, it seems that Iranian people by far prefer the western ones.
For three main reasons.
1. Culturally, Iranians are far more aligned with the West.
2. Western imperialism results in more democracy. Not 100%, but not this bad.
3. Countries under the West's influence do much better economically. Iran is extremely poor right now.
Also, they're not just followers. There's a kind of "merchant" behaviour too, I think: signalling and trading in hype perspectives.
But to be fair, I'm not sure what the average dev/eng is supposed to do against a climate of constant change, many disparate opinionated groups with disparate tech stacks, and, IMO, a pretty ~~pure~~ poor engineering culture when it comes to actually weighing the value of tech/methods against relevant constraints and trade-offs.
Yeah, I described trends in software development as being like the length of skirts; the same logic drives both kinds of change. But I don't consider type systems to be hype. I think they're frequently poorly implemented, with mathematically illiterate notation, but they're so damn useful when done reasonably right.
Most of my understanding on type systems comes from taking a course on the calculation of programs from the author of this book.
To be blunt, this course and the understanding this book gave me crystallized why I was unhappy with the current state of software development and it was one more nudge pushing me out of the field. I caution others that reading and understanding this book may change your understanding of the software development world enough that you don't want to be part of it either.
Programming in the 1990s: An Introduction to the Calculation of Programs | Springer Nature Link (formerly SpringerLink) https://share.google/K81ZlVTbfoR2oeYLh
Type systems are an orthogonal topic. I'd argue that the biggest hypers of AI are in the static-types camp, because it allows them to iterate quickly and more safely than using dynamic types.
I don't think people realize how important this is.
If one of the vendors manages to get their protocol to become the target platform (e.g. OpenAI and the Apps SDK), that is essentially their vendor lock-in to become the next iOS/Android.
Private APIs or EEE strategies are going to be something to keep an eye on, and I wish regulators would step in to prevent them before it's too late.
How is it any better if instead of _one_ vendor, _two_ vendors push an immature version of a standards extension that mainly caters to their needs and give it the official stamp of approval under the MCP umbrella?
Having a chatbot that drives websites inside of it is such an attempted monopolist play. Having a system agent that can interact with apps via API without being connected to the app is the pattern that's both elegant and preserves freedom.
I don't see how this is a step down from existing web applications. Should companies building web applications not be opinionated about their user interfaces? When I look at Notion, I should just get any view of the data inside it, regardless of whether it's the same view as my coworker gets? How is this preferable?
> Having a system agent that can interact with apps via API without being connected to the app is the pattern that's both elegant and preserves freedom
MCP was created so llm companies can have a plugin system. So instead of them being the API provider, they can become the platform that we build apps/plugins for, and they become the user interface to end consumers.
MCP defines the API so vendors of LLM tools like cursor, claude code, codex etc don't all make their own bespoke, custom ways to call tools.
The main issue is the disagreement on how to declare the MCP tool exists. Cursor, vscode, claude all use basically the same mcp.json file, but then codex uses `config.toml`. There's very little uniformity in project-specific MCP tools as well, they tend to be defined globally.
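To make the fragmentation concrete, here is a sketch of the two declaration styles (server name, package, and env var are hypothetical, and the exact schemas vary a bit between clients). First, the `mcp.json` shape used by Cursor, VS Code, and Claude:

```json
{
  "mcpServers": {
    "example-docs": {
      "command": "npx",
      "args": ["-y", "@example/docs-mcp-server"],
      "env": { "DOCS_API_KEY": "..." }
    }
  }
}
```

And roughly the same server declared in Codex's `config.toml`:

```toml
[mcp_servers.example-docs]
command = "npx"
args = ["-y", "@example/docs-mcp-server"]
```

Same information, two incompatible homes for it, which is exactly the uniformity problem.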
>but isn't this solved by publishing good API docs, and then pointing the LLM to those docs as a training resource?
Yes.
It's not a dumb question. The situation is so dumb you feel like an idiot for asking the obvious question. But it's the right question to ask.
Also, you don't need to "train" the LLM on those resources. All major models have function/tool calling built in. Either create your own readme.txt with extra context or, if possible, update the APIs with more "descriptive" metadata (e.g. something like Swagger/OpenAPI) to help the LLM understand how to use the API.
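For a sense of what that "descriptive" metadata looks like in practice, here is a sketch of a function-calling-style tool schema (the endpoint and field names are made up for illustration); the `description` strings are what the model actually leans on to decide when and how to call the tool:

```json
{
  "name": "get_invoice",
  "description": "Fetch a single invoice by its ID from the billing API.",
  "parameters": {
    "type": "object",
    "properties": {
      "invoice_id": {
        "type": "string",
        "description": "Unique invoice identifier, e.g. INV-1234"
      }
    },
    "required": ["invoice_id"]
  }
}
```

The better these descriptions are, the less extra prompting the model needs.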
You keep saying that major models have "tool calling built in". And that by giving them context about available APIs, the LLM can "use the API".
But you don't explain, in any of your comments, precisely how an LLM in practice is able to itself invoke an API function. Could you explain how?
A model is typically distributed as a set of parameters, interpreted by an inference framework (such as llama.cpp), and not as a standalone application that understands how to invoke external functions.
So I am very keen to understand how these "major models" would invoke a function in the absence of a chassis container application (like Claude Code, that tells the model, via a prompt prefix, what tokens the model should emit to trigger a function, and which on detection of those tokens invokes the function on the model's behalf - which is not at all the same thing as the model invoking the function itself).
Just a high level explanation of how you are saying it works would be most illuminating.
The LLM output differentiates between text output intended for the user to see, vs tool usage.
You might be thinking "but I've never seen any sort of metadata in textual output from LLMs, so how does the client/agent know?"
To which I will ask: when you loaded this page in your browser, did you see any HTML tags, CSS, etc.? No. But that's only because your browser read the HTML and rendered the page, hiding the markup from you.
Similarly, what the LLM generates looks quite different compared to what you'll see in typical, interactive usage.
The schema is enforced much like end-user-visible structured outputs work -- if you're not familiar, many services will let you constrain the output to validate against a given schema. See for example:
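Mechanically, the harness-side loop the parent comment describes can be sketched like this (pure simulation, no real model or inference framework; the tool name and message format are made up for illustration). The key point: the model only *emits tokens describing* a call, and the harness does the actual invocation:

```python
import json

# Hypothetical tool registry. The harness ("chassis") owns these
# functions; the model never executes anything itself.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def handle_model_output(raw: str) -> str:
    """Handle one model turn. If the (schema-constrained) output is a
    tool call, the harness invokes the function; otherwise the text is
    shown to the user."""
    msg = json.loads(raw)
    if msg.get("type") == "tool_call":
        fn = TOOLS[msg["name"]]
        # In a real agent loop, this result would be appended to the
        # conversation and the model sampled again.
        return fn(**msg["arguments"])
    return msg["content"]

# Simulated model turn: structured output, not user-visible text.
turn = '{"type": "tool_call", "name": "get_weather", "arguments": {"city": "Oslo"}}'
print(handle_model_output(turn))  # -> Sunny in Oslo
```

Real harnesses differ in the wire format (special tokens, JSON blocks, XML-ish tags), but the division of labor is the same: the model proposes, the harness disposes.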
It is. Anthropic builds stuff like MCP and skills to try and lock people into their ecosystem. I'm sure they were surprised when MCP totally took off (I know I was).
Rechat | API Engineer, Front End Engineer | Remote
Rechat is the leading AI-powered operating platform for real estate brokers and agents. Think Shopify for real estate—a mobile-first super app with marketing automation, workflow management, and everything agents and brokers need to run their business in one place. We’re a small, sharp team building a big, sophisticated product without the usual corporate nonsense.
Why You’ll Like It Here:
Remote-first with flexible hours
Minimal meetings—we focus on real work
No tech hype-chasing—we use the right tools, not the latest trends
Flat team structure—no layers of bureaucracy
Who We’re Looking For:
Senior Frontend Engineer (TypeScript, React) – Own the UI and craft intuitive experiences
Senior Backend Engineer (Node.js, PostgreSQL) – Build and refine our robust, high-performance backend
If you thrive in small teams solving big technical challenges, reach out to emil+hn@rechat.com.
I hate myself for saying this, but HN should consider closing new registrations for a while until we figure out what to do with this.