Hacker News | codingwagie's comments

Financial engineering is why people are poor. They are literally competing for goods and services with investment firms.


Meritocracy looks a bit different when individuals standing alone are expected to go toe to toe with multi-industry corporate conglomerates and their franchisees.


and when society is structured so that the people/orgs with lots of resources have unfair advantage over those who don't (e.g. monopoly powers)


I've commonly seen grants like this diluted to almost zero. I would think carefully and strategically about how you can get some sort of value for the equity you've vested. It's probably harder than you think to get decent liquidity on it.


Is it legal to dilute this to zero? Can't you sue post-exit (e.g. Saverin and Facebook)?


It happens all the time; I have seen it in NYC. Usually it's an early-stage thing: a cofounder leaves after a year, etc. It's much harder to do with a complicated cap table. Investors I could name even suggest it.


Suing would only make sense if the dollar value is very high.


At a normally-papered startup? Yes. No.


The juice has to be worth the squeeze. No sense in fighting against fiduciary duty, minority shareholder oppression, etc., etc. unless there is some sort of value there. This usually means a successful exit before taking action.


I think we're saying the same thing --- that none of this matters, just walk away with the vested shares and be a friend to the company. Diluting his founder shares in subsequent rounds is going to be a nonevent, and diluting him to zero in an acquisition --- unless it's a seller's market or a bidding war --- may be as well. It's just not worth worrying about; I think the only real question here might be "do I take a buyout if offered", and this person is nowhere near that yet.


Ahhh yes same thing


There are deep reasons why society is not like this anymore.


Care to list them?


This works for UX. I give it vague requirements, and it implements something I didn't ask for, but better than what I would have thought of.


Yeah, but there are all sorts of political implications to admitting that.


Can you expand? What is the political implication?


What do you mean?


The value of these researchers to Meta is surely more than a few billion. Love seeing free markets benefit the world.


I'm a bit torn about this. If it ends up hurting OpenAI so much that they close shop, what is the incentive for another OpenAI to come up?

You can spend time making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?


Trade secrets and patent laws still apply.

But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.


How do you figure? If you assume that Meta gets the state-of-the-art model, revenue is nonexistent unless they start a premium tier or put in ads. Even then, it's not clear that revenue would exceed the money spent on inference and training compute.


It's worth a few billion (easily) to keep people's default time sink as aimlessly playing on FB/IG as opposed to chatting with ChatGPT. Even if that scroll is replaced by chatting with Llama as opposed to seeing posts.


The OpenAI SDK is much simpler to use and understand than LangChain etc. I haven't deployed really complicated agents, but I have a number of simple use cases that work just fine.


I love to hate on Google, but yeah, their models are really good. The larger context window is huge.


Doesn't OpenAI's GPT 4.1 also have 1 million context length?


This only works at the very early stage.


At what point do you think it starts to stop working?


I'm seeing big advances that aren't shown in the benchmarks: I can simply build software now that I couldn't build before. The level of complexity that I can manage and deliver is higher.


A really important thing is the distinction between performance and utility.

Performance can improve linearly and utility can be massively jumpy. For some people/tasks performance can have improved but it'll have been "interesting but pointless" until it hits some threshold and then suddenly you can do things with it.


Yeah I kind of feel like I'm not moving as fast as I did, because the complexity and features grow - constant scope creep due to moving faster.


I am finding that my ability to use it to code aligns almost perfectly with increasing token memory.


yeah, the benchmarks are just a proxy. o3 was a step change where I started to really be able to build stuff I couldn't before


Mind giving some examples?


Not OP, but a couple of days ago I managed to vibecode my way through a small app that pulled data from a few services and ran a few validation checks. By itself it's not very impressive, but my input was literally "this is how the responses from endpoints A, B, and C look. This field included somewhere in A must be somewhere in the response from B, and the response from C must feature this and that from responses A and B. If the responses include links, check that they exist". To my surprise, it generated everything in one go. No retry nor Agent mode churn needed. In the not-so-distant past this would require progressing through smaller steps, and I had to fill in tests to nudge Agent mode to not mess up. Not today.
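For the curious, the checks described read roughly like the sketch below. Everything here is made up for illustration: the field names (`record_id`, `source_id`, `batch`, `links`) and the exact rules are stand-ins, not the actual app.

```python
# Hypothetical sketch of cross-validating three endpoint responses.
# All field names are invented for illustration.
import urllib.request


def link_ok(url: str) -> bool:
    """Check that a link resolves (HEAD request, non-error status)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status < 400
    except Exception:
        return False


def validate(a: dict, b: dict, c: dict) -> list[str]:
    """Return a list of validation errors; empty list means all checks pass."""
    errors = []
    # A field from response A must appear somewhere in response B.
    if a["record_id"] not in b.values():
        errors.append("A.record_id missing from B")
    # Response C must carry fields from both A and B.
    if c.get("source_id") != a["record_id"]:
        errors.append("C.source_id does not match A.record_id")
    if c.get("batch") != b.get("batch"):
        errors.append("C.batch does not match B.batch")
    # Any links present must resolve.
    for url in c.get("links", []):
        if not link_ok(url):
            errors.append(f"dead link: {url}")
    return errors
```

The point of the anecdote stands either way: boilerplate like this, plus the HTTP plumbing around it, is exactly the kind of glue code current tooling now produces in one shot.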


I’m wrapping up doing literally the same thing. I did it step-by-step. But, for me there was also a process of figuring out how it should work.


what tools did you use?


> what tools did you use?

Nothing fancy. Visual Studio Code + Copilot, agent mode, a couple prompt files, and that's it.


Do you mind me asking which language, and whether you have any esoteric constraints in the apps you build? We use Java in a monorepo and have a fully custom-rolled framework on top of which we build our apps. Do you find vibe coding works okay with those sorts of constraints, or do you just end up with a generic app?


Okay but this has all to do with the tooling and nothing to do with the models.


I mostly disagree with this.

I have been using 'aider' as my go to coding tool for over a year. It basically works the same way that it always has: you specify all the context and give it a request and that goes to the model without much massaging.

I can see a massive improvement in results with each new model that arrives. I can do so much more with Gemini 2.5 or Claude 4 than I could do with earlier models and the tool has not really changed at all.

I will agree that for the casual user, the tools make a big difference. But if you took the tool of today and paired it with a model from last year, it would go in circles.


Can you explain why?


You can write projects with LLMs thanks to tools that can analyze your local project's context, which didn't exist a year ago.

You could use Cursor, Windsurf, Q CLI, Claude Code, whatever else with Claude 3 or even an older model and you'd still get usable results.

It's not the models which have enabled "vibe coding", it's the tools.

Additional proof of that: the new models focus more and more on coding in their releases, while other fields have not benefited at all from the supposed model improvements. That wouldn't be the case if the improvements were really due to the models and not the tooling.


You need a certain quality of model to make 'vibe coding' work. For example, I think even with the best tooling in the world, you'd be hard pressed to make GPT 2 useful for vibe coding.


I'm not claiming otherwise. I'm just saying that people say "look what we can do with the new models" while completely ignoring the fact that the tooling has improved a hundredfold (or rather, there was no tooling at all and now there is).


That contradicts what you said earlier -- "this has all to do with the tooling and nothing to do with the models".


Clearly nobody is talking about GPT-2 here, but I posit that you would have a perfectly reasonable "vibe coding" experience with models like the initial ChatGPT one, provided you have all the tools we have today.


OK, no objections from me there.


ChatGPT itself has gotten much better at producing and reading code than it was a year ago, in my experience.


They're using a specific model for that, and since they can't access private GitHub repos the way Microsoft can, they rely on code shared publicly by devs, which keeps growing every month.

