why give it root access to your life? i don't use these tools but it seems like you should never give anything that access. if a claw needs email, set up a google account just for it and forward relevant stuff to it. share your calendar with it. whatever, just don't let it "be" you.
access control, provisioning, and delegation have been solved for a very long time now.
The power of voice dictation for me is that I can get out every scrap of nuance and insight I can think of as unfiltered verbal diarrhea. Doing this solidly gives me an extra nine in my chance of getting good outputs.
Stream-of-consciousness typing for me is still slower and causes me to buffer and filter more, and deliberately crafting a perfect prompt is far slower still.
LLMs are great at extracting the essence of unstructured inputs and voice lets me take best advantage of that.
Voice output, on the other hand, is completely useless unless perhaps it can play at 4x speed. But I need to be able to skim LLM output quickly and revisit important points repeatedly. Can't see why I'd ever want to serialize and slow that down.
Yes[1]. Copyright applies to human creations, not machine generated output.
It's possible to use AI output in human-created content, and it can be copyrightable; substantive, transformative, human-creative alteration of AI output is also copyrightable.
> This analysis will be “necessarily case-by-case” because it will “depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work.”
This seems the opposite of the cut-and-dried "cannot be copyrighted" stance I was replying to.
Yes, it does depend on the circumstances. You are free to waste your own time trying this at the copyright office, but in my opinion, this project's 100% LLM output, where the human element is just writing prompts and steering the LLM, is the same circumstance as my linked case, where the human prompted Midjourney 624 times before producing the image the human deemed acceptable. The copyright office has this to say:
> As the Office described in its March guidance, “when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the ‘traditional elements of authorship’ are determined and executed by the technology—not the human user.”
Yes, MCP could've been solved differently - e.g. with an extension to the OpenAPI spec, at least from the perspective of REST APIs... But you're misunderstanding the selling point.
The issue is that granting the LLM access to the API needs something more granular than either "I don't care, just keep doing whatever you wanna do" or getting prompted every 2 seconds so the LLM can ask permission to access something.
With MCP, each of these actions is exposed as a tool and can be safely added to the "you may execute this as often as you want" list, and you'll never need to worry that the LLM randomly decides to delete something - because you'll still get a prompt for that, as that hasn't been whitelisted.
This is once again solvable in different ways, and you could argue the current approach is actually pretty suboptimal too... I don't really need the LLM to ask for permission to delete something it just created, for example, but MCP only lets me whitelist whole actions, hence still some unnecessary security prompts. Still, the MCP tool adds a useful layer: we can use it both to essentially remove the authentication on the API we want the LLM to be able to call and to greenlight actions for it to execute unattended.
Again, it's not a silver bullet and I'm sure what we'll eventually settle on will be something different - however, as of today, MCP servers provide value to the LLM stack. Even if this value could be provided better in other ways, the current alternatives all come with different trade-offs.
And all of what I wrote ignores the fact that not every MCP server is just for REST APIs. Local permissions need to be solved too. The tool-use model is leaky, but better than nothing.
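To make the per-tool whitelisting idea concrete, here's a minimal sketch in TypeScript. The tool names and the host-side gate are hypothetical, not the actual MCP SDK; the point is only that every tool call funnels through one check, so read-only tools can run unattended while destructive ones still prompt:

```typescript
// Hypothetical host-side gate: every LLM tool call passes through here.
type ToolCall = { tool: string; args: Record<string, unknown> };

// The "you may execute this as often as you want" list.
const autoApproved = new Set(["list_issues", "read_file"]);

function requiresConfirmation(call: ToolCall): boolean {
  // Anything not explicitly whitelisted (e.g. delete_file) falls back
  // to asking the user before it runs.
  return !autoApproved.has(call.tool);
}

console.log(requiresConfirmation({ tool: "read_file", args: {} }));   // false: runs unattended
console.log(requiresConfirmation({ tool: "delete_file", args: {} })); // true: user gets prompted
```

The granularity complaint above maps directly onto this sketch: the allowlist keys on tool names, not on context like "did the LLM just create this file", so some prompts are unavoidable.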
It's been a while, but why is io_uring not helpful for larger buffers? I'd think the zero-copy I/O capabilities would make it more helpful for larger payloads, not less.
io_uring supports zero-copy, but it is not primarily a copy-reduction mechanism; it is a syscall-reduction mechanism. Larger buffers mean fewer syscalls to start with, so less benefit.
Exactly this. The kernel-allocated buffers can help, but if that were a primary concern you'd be in driver territory. For anything still in the userspace optimization domain, the fraction of time spent in syscalls for large buffers in a buffered flow is heavily amortized and not overly relevant.
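A back-of-envelope calculation shows why the syscall-reduction win shrinks with buffer size (purely illustrative arithmetic, not tied to any io_uring API):

```typescript
// Number of read() calls needed to move a payload at a given buffer size.
// io_uring mainly amortizes per-syscall overhead, so as the buffer grows,
// the syscall count (and thus the potential win) drops on its own.
function readCalls(totalBytes: number, bufSize: number): number {
  return totalBytes / bufSize;
}

const GiB = 1 << 30; // 1 GiB payload
for (const bufSize of [4096, 65536, 1 << 20]) {
  console.log(`${bufSize}-byte buffers -> ${readCalls(GiB, bufSize)} read() calls`);
}
```

Going from 4 KiB to 1 MiB buffers cuts the syscall count by 256x before io_uring even enters the picture, which is the "heavily amortized" point above.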
Scenes from this book have been regularly coming to mind for almost 20 years now. No idea how it holds up for new readers today but it left quite an impact way back then.
Interesting! I wrote a Smalltalk in CL last year that did the same kind of mapping: Smalltalk classes compile to CLOS classes and every message is a CLOS method. Worked great at the start and then performance dropped off as more classes compiled.
I never took the time to get that sorted out, but it smells suspiciously similar to what you described here. Thanks for the write-up!
What, exactly, is "safe" about TypeScript other than type safety?
TypeScript is just a language anyway. It's the runtime that needs to be contained. In that sense it's no different from any other interpreter or runtime, whether it be Go, Python, Java, or any shell.
In my view this really is best managed by the OS kernel, as the ultimate responsibility for process isolation belongs to it. Relying on userspace solutions to enforce restrictions only gets you so far.
I agree on all counts and that this project is silly on the face of it.
My comment was more that there is a massive cohort of devs who have never done sysadmin work and know nothing of the prior art in the space. TypeScript "feels" safe and familiar and like the right way to accomplish their goals, regardless of whether it actually is.