Meritocracy looks a bit different when individuals standing alone are expected to go toe to toe with multi-industry corporate conglomerates and their franchisees.
I've commonly seen grants like this diluted to almost zero. I would think carefully and strategically about how you can get some sort of value for the equity you've vested. It's probably harder than you think to get decent liquidity on it.
It happens all the time; I've seen it in NYC. Usually it's an early-stage thing, e.g. a cofounder leaves after a year. It's much harder to do with a complicated cap table. Investors I could name even suggest it.
The juice has to be worth the squeeze. No sense in fighting against fiduciary duty, minority shareholder oppression, etc., etc. unless there is some sort of value there. This usually means a successful exit before taking action.
I think we're saying the same thing --- that none of this matters, just walk away with the vested shares and be a friend to the company. Diluting his founder shares in subsequent rounds is going to be a nonevent, and diluting him to zero in an acquisition --- unless it's a seller's market or a bidding war --- may be as well. It's just not worth worrying about; I think the only real question here might be "do I take a buyout if offered", and this person is nowhere near that yet.
I'm a bit torn about this. If it ends up hurting OpenAI so badly that they close shop, what is the incentive for another OpenAI to come up?
You can spend time making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?
But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
How do you figure? Even if you assume Meta gets the state-of-the-art model, revenue is non-existent unless they start a premium tier or put in ads. Even then, it's not clear they will exceed the money spent on inference and training compute.
It's worth a few billion (easily) to keep people's default time sink as aimlessly playing on FB/IG as opposed to chatting with ChatGPT, even if that scroll is replaced by chatting with Llama rather than seeing posts.
The OpenAI SDK is much simpler to use and understand than LangChain etc. I haven't deployed really complicated agents, but I have a number of simple use cases that work just fine.
I'm seeing big advances that aren't shown in the benchmarks: I can simply build software now that I couldn't build before. The level of complexity that I can manage and deliver is higher.
A really important thing is the distinction between performance and utility.
Performance can improve linearly and utility can be massively jumpy. For some people/tasks performance can have improved but it'll have been "interesting but pointless" until it hits some threshold and then suddenly you can do things with it.
Not OP, but a couple of days ago I managed to vibecode my way through a small app that pulled data from a few services and did a few validation checks. By itself it's not very impressive, but my input was literally "this is how the responses from endpoints A, B and C look. This field included somewhere in A must be somewhere in the response from B, and the response from C must feature this and that from responses A and B. If the responses include links, check that they exist". To my surprise, it generated everything in one go. No retries, no Agent-mode churn needed. In the not-so-distant past this would have required progressing through smaller steps, and I had to fill in tests to nudge Agent mode not to mess up. Not today.
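The checks described in that prompt could be sketched roughly like this. A minimal Python sketch; the field names (`order_id`, `known_orders`, `status`, `links`) are hypothetical stand-ins for whatever the real endpoints A, B, and C return, and the link check is stubbed rather than issuing real HTTP requests:

```python
def validate(resp_a: dict, resp_b: dict, resp_c: dict) -> list[str]:
    """Return a list of validation errors (empty means all checks passed)."""
    errors = []

    # A field from A must appear somewhere in B's response.
    if resp_a["order_id"] not in resp_b.get("known_orders", []):
        errors.append("order_id from A missing in B")

    # C must feature data from both A and B.
    if resp_c.get("order_id") != resp_a["order_id"]:
        errors.append("C does not echo A's order_id")
    if resp_c.get("status") != resp_b.get("status"):
        errors.append("C's status disagrees with B")

    # If responses include links, check them (stubbed: a real check
    # would issue an HTTP HEAD request per link and expect a 2xx).
    for link in resp_c.get("links", []):
        if not link.startswith("https://"):
            errors.append(f"suspect link: {link}")

    return errors
```

Unremarkable code, which is the point of the comment above: the win was getting the whole thing, wiring included, generated correctly in one shot.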
Do you mind me asking which language, and whether you have any esoteric constraints in the apps you build? We use Java in a monorepo, and have a fully custom-rolled framework on top of which we build our apps. Do you find vibe coding works OK with those sorts of constraints, or do you just end up with a generic app?
I have been using 'aider' as my go-to coding tool for over a year. It basically works the same way it always has: you specify all the context and give it a request, and that goes to the model without much massaging.
I can see a massive improvement in results with each new model that arrives. I can do so much more with Gemini 2.5 or Claude 4 than I could do with earlier models and the tool has not really changed at all.
I will agree that for the casual user, the tools make a big difference. But if you took the tool of today and paired it with a model from last year, it would go in circles.
You can write projects with LLMs thanks to tools that can analyze your local project's context, which didn't exist a year ago.
You could use Cursor, Windsurf, Q CLI, Claude Code, whatever else with Claude 3 or even an older model and you'd still get usable results.
It's not the models which have enabled "vibe coding", it's the tools.
Additional evidence: new model releases focus more and more on coding, while other fields have not benefited at all from the supposed model improvements. That wouldn't be the case if the improvements were really due to the models and not the tooling.
You need a certain quality of model to make 'vibe coding' work. For example, I think even with the best tooling in the world, you'd be hard pressed to make GPT 2 useful for vibe coding.
I'm not claiming otherwise. I'm just saying that people say "look what we can do with the new models" while completely ignoring the fact that the tooling has improved a hundredfold (or rather, there was no tooling at all and now there is).
Clearly nobody is talking about GPT-2 here, but I posit that you would have a perfectly reasonable "vibe coding" experience with models like the initial ChatGPT one, provided you have all the tools we have today.
They're using a specific model for that, and since they can't access private GitHub repos the way MS can, they rely on code shared by devs, which keeps growing every month.