_raz's comments | Hacker News

Thanks, very nice! A very clear and explanatory write-up.


An RPC standard that plays nicely with LLMs?


I had to take a break from extending our Go LLM wrapper, https://github.com/modfin/bellman, with MCP to write the blog entry. So some sort of server-like thing will be added soon.


Idk, I'm kind of agnostic and ended up throwing it in there.

Regurgitating the OAuth draft doesn't seem that useful imho, and why am I forced into it if I'm using HTTP? There seem to be plenty of use cases where unattended things would like to interact over HTTP, where we usually use other things besides OAuth.

It all probably could have been replaced by

- The Client shall implement OAuth2
- The Server may implement OAuth2


For local servers this doesn't matter as much. For remote servers, you won't really have any serious MCP servers without auth, and you want some level-setting between clients and servers. OAuth 2.1 is a good middle ground.

That's also where, with the new spec, you don't actually need to implement anything from scratch. The server issues a 401 with a WWW-Authenticate header pointing to metadata for authorization server locations. The client takes that, does discovery, and then runs the OAuth flow (there are plenty of client libraries for that). You don't need to implement your own OAuth server.
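
Roughly, the client side of that handshake might look like this (a sketch; the hostname and exact metadata shape are illustrative):

    // 1. Hit the MCP endpoint unauthenticated; expect a 401.
    const res = await fetch("https://mcp.example.com/mcp", { method: "POST" });
    if (res.status === 401) {
        // e.g. WWW-Authenticate: Bearer resource_metadata="https://mcp.example.com/.well-known/oauth-protected-resource"
        const header = res.headers.get("WWW-Authenticate") ?? "";
        const match = header.match(/resource_metadata="([^"]+)"/);
        if (match) {
            // 2. Fetch the metadata to learn where the authorization server lives.
            const metadata = await (await fetch(match[1])).json();
            // 3. Run a standard OAuth flow against metadata.authorization_servers
            //    using any off-the-shelf client library.
        }
    }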


Bearer tokens work elsewhere and imho are drastically simpler than OAuth.


But where would you get bearer tokens? How would you manage consent and scopes? What about revocation? OAuth is essentially the "engine" that gives you the bearer tokens you need for authorization.
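
To make that concrete, here's roughly the kind of exchange that mints a bearer token in the first place (a client-credentials sketch; the endpoint, ids, and scope are all made up):

    // POST to the authorization server's token endpoint...
    const resp = await fetch("https://auth.example.com/oauth/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
            grant_type: "client_credentials",
            client_id: "my-client",
            client_secret: "my-secret",
            scope: "mcp:tools",
        }),
    });
    // ...and a scoped, revocable bearer token comes back.
    const { access_token } = await resp.json();

Consent, scopes, and revocation all hang off that issuance step, which is exactly the part plain "bearer tokens" hand-wave away.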


After publishing the blog post, I ended up doing a similar thing the other day: https://github.com/modelcontextprotocol/modelcontextprotocol...

From reading your issue, I'm not holding my breath.

It all kind of seems too important to fuck up


In the grand scheme of things I think we are still very early. MCP might be the thing, which is why I'd rather try and contribute if I can; it does have a grassroots movement I haven't seen in a while. But the wonderful thing about the market is that incentives, e.g. good customer experiences that people pay for, will probably win. This means that MCP, if it remains the focal point for this sort of work, will become a lot better regardless of whether early pokes and prods by folks like us are successful. :)


Kind of my fear exactly. We are moving so fast that MCP might cement a transport protocol that could take years or decades to get rid of in favor of something better.

Kind of reminds me of the browser wars during the 90s, where everyone tried to move the fastest and created splits in standards and browsers that we didn't really get rid of for a good 20 years or more. IE11 was around for far too long.


I think that transport is a non-issue.

Whatever the transport evolves to, it is easy to create proxies that convert from one transport to another, e.g. https://github.com/punkpeye/mcp-proxy
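
To make the idea concrete, a bare-bones stdio-to-SSE bridge fits in a page of Node (a sketch, not mcp-proxy's actual code; "some-mcp-server" is a placeholder):

    import { spawn } from "node:child_process";
    import http from "node:http";

    // Spawn the stdio MCP server and track connected SSE clients.
    const child = spawn("some-mcp-server", { stdio: ["pipe", "pipe", "inherit"] });
    const clients = new Set<http.ServerResponse>();

    // Fan newline-delimited JSON-RPC messages from the child's stdout
    // out to every connected SSE client.
    child.stdout!.on("data", (chunk) => {
        for (const line of chunk.toString().split("\n").filter(Boolean)) {
            for (const res of clients) res.write(`data: ${line}\n\n`);
        }
    });

    http.createServer((req, res) => {
        if (req.method === "GET" && req.url === "/sse") {
            res.writeHead(200, { "Content-Type": "text/event-stream" });
            clients.add(res);
            req.on("close", () => clients.delete(res));
        } else if (req.method === "POST" && req.url === "/message") {
            // Forward incoming JSON-RPC messages to the child's stdin.
            let body = "";
            req.on("data", (c) => (body += c));
            req.on("end", () => {
                child.stdin!.write(body + "\n");
                res.writeHead(202).end();
            });
        } else {
            res.writeHead(404).end();
        }
    }).listen(3000);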

As an example, every server that you see on the Glama MCP registry today is hosted using stdio. However, the proxy makes them available over SSE, and could theoretically make them available over WS, 'streamable HTTP', etc.

Glama is just one example of doing this, but I think that other registries/tools will emerge that will effectively make the transport the server chooses to implement irrelevant.


Do you think WebTransport and HTTP/3 could provide better alternatives for transport?


Well put


Glad to hear it; I also thought I was alone :)


toml-rpc anyone? :)


Yes, the protocol seems fine to me in and of itself. It's the transport portion that seems to be a dumpster fire on the HTTP side of things.


This article feels like an old timer who knows WebSockets and just doesn't want to learn what SSE is. I support the decision to ditch WebSockets, because WebSockets would only add extra bloat and complexity to your server, whereas SSE is just HTTP. I don't understand, though, why have a "stdio" transport if you could just run an HTTP server locally.
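
For what it's worth, "just HTTP" really is minimal; a toy SSE endpoint (an illustrative Node sketch with a made-up payload) is only a few lines:

    import http from "node:http";

    http.createServer((req, res) => {
        // An SSE response is ordinary HTTP with a streaming body.
        res.writeHead(200, {
            "Content-Type": "text/event-stream",
            "Cache-Control": "no-cache",
        });
        const timer = setInterval(() => {
            // Each event is just "data: ...\n\n" on the wire.
            res.write(`data: ${JSON.stringify({ ts: Date.now() })}\n\n`);
        }, 1000);
        req.on("close", () => clearInterval(timer));
    }).listen(8080);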


I'm confused by the "old timer" comment, as SSE not only predates WebSockets, but the techniques surrounding its usage go really far back (I was doing SSE-like things--using script blocks to get incrementally-parsed data--back around 1999). If anything, I could see the opposite issue, where someone could argue that the spec was written by someone who just doesn't want to learn how WebSockets works, and is stuck in a world where SSE is the only viable means to implement this? And like, in addition to the complaints from this author, I'll note the session resume feature clearly doesn't work in all cases (as you can't get the session resume token until after you successfully get responses).

That all said, the real underlying mistake here isn't the choice of SSE... it is trying to use JSON-RPC -- a protocol which very explicitly and very proudly is supposed to be stateless -- and to then use it in a way that is stateful in a ton of ways that aren't ignorable, which in turn causes all of this other insanity. If they had correctly factored out the state and not incorrectly attempted to pretend JSON-RPC was capable of state (which might have been more obvious if they used an off-the-shelf JSON-RPC library in their initial implementation, which clearly isn't ever going to be possible with what they threw together), they wouldn't have caused any of this mess, and the question about the transport wouldn't even be an issue.


> I'm confused by the "old timer" comment

SSE only gained traction after HTTP/2 came around with multiplexing; before that, each open SSE stream tied up one of the browser's few per-origin HTTP/1.1 connections.


HTTP calls may be blocked by firewalls even internally, and it's overkill to force stdio apps to expose HTTP endpoints for this case alone.

As in, how would an MCP client access the `git` command without stdio? You'd have to run a wrapper server for that, or just use stdio instead.


> As in, how would an MCP client access the `git` command without stdio?

MCP clients don't access any commands. MCP clients access tools that MCP servers expose.


Here's a custom MCP tool I use to run commands and parse stdout / stderr all the time:

    import { exec } from "node:child_process";
    import { promisify } from "node:util";

    // Tool handler: run `command` in a shell and return stdout/stderr
    // in the MCP tool-result shape ({ content, isError }).
    async function runCommand(command: string) {
        try {
            const execPromise = promisify(exec);
            const { stdout, stderr } = await execPromise(command);
            if (stderr) {
                return {
                    content: [{
                        type: "text",
                        text: `Error: ${stderr}`
                    }],
                    isError: true
                };
            }
            return {
                content: [{
                    type: "text",
                    text: stdout
                }],
                isError: false
            };
        } catch (error: any) {
            return {
                content: [{
                    type: "text",
                    text: `Error executing command: ${error.message}`
                }],
                isError: true
            };
        }
    }
Yeah, if you want to be super technical, it's Node that does the actual command running, but in my opinion, that's as good as saying the MCP client is...


The point is that MCP servers expose tools that can do whatever MCP servers want them to do, and it doesn’t have to have anything to do with stdio.


Must have used GraphQL as a role model no doubt


GraphQL is transport agnostic

