Hacker News

I tried out vibe coding this weekend. I've been using AI to get smaller examples and ask questions, and it's been great, but past attempts to have it do everything for me have produced code that still needed a lot of changes.

I hit an OpenSearch bug this week where browser-based requests fail. It's due to zstd becoming a standard part of Accept-Encoding and OpenSearch not supporting it correctly, so I wanted to install a browser plugin that modified the HTTP request headers sent to my servers.
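For anyone who wants to reproduce the symptom from the command line, a minimal sketch (the host is a placeholder, and `gzip, deflate, br, zstd` stands in for a modern browser's default; the exact default varies by browser):

```shell
# What the browser now sends by default, zstd included:
BROWSER_DEFAULT='gzip, deflate, br, zstd'

# Workaround: strip zstd from the header before talking to the
# affected server.
ACCEPT=$(printf '%s' "$BROWSER_DEFAULT" | sed 's/, zstd//')
echo "$ACCEPT"   # gzip, deflate, br

# Then, e.g. (opensearch.example:9200 is a placeholder host):
# curl -H "Accept-Encoding: $ACCEPT" http://opensearch.example:9200/
```

You can't do the equivalent from page JavaScript because Accept-Encoding is a browser-controlled header, which is why a header-rewriting extension is the natural fix.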

I don't know about everyone else, but I love that browser plugins are possible and I hate having to find them. It's mostly because you never know whether you can trust a plugin, and even if you find a good one, you have to worry about it being bought out in the future. With vibe coding I was able to build a browser extension in 45 minutes that had more features than I originally planned for.

I spent more time documenting the experience than building it, which is wild. If you're interested, you can look at the README at https://github.com/mattheimer/vibe-headers

But I left the experience with two thoughts.

Even seasoned developers will be using vibe coding in the future.

I think in the near future the browser plugin market will partially collapse because eventually browsers will build extensions themselves on the fly using natural language.



In my experience, "vibe coding" can produce a rich prototype very fast.

Then as scope expands you're left with something that is difficult to extend, because it's impossible to keep everything in the LLM context: partly because of context limits, and partly because of the fatigue of communicating all that context yourself.

At this point you can do a critical analysis of what you have and design a more rigorous specification.


I don't disagree, but context limits are expanding rapidly. Gemini 2.5 Pro, which was used here, has a 1-million-token context window, with 2 million coming soon. Cost will be a concern, but context size limits will not.


Totally agree. I mentioned it in another comment, but Gemini was a game changer for increasing the size of the project I can feasibly have AI work on.

The only issue is that Gemini's context window isn't consistent (I've seen my experience corroborated here on HN a couple of times). Maybe if all 900k tokens were unique information it would stay useful up to 1 million, but whether my prompt carries 50k or 150k tokens of context, once the total context passes about 200k, response coherence and focus go out the window.


I'd love some more innovation on increasing context size without blowing up RAM usage. Mistral Small 2503 24B and Gemma 3 27B both fit into 24GB at Q4, but Mistral can only go up to about 32k of context and Gemma about 12k before all VRAM is exhausted, even with flash attention and KV cache quantization.


What editor are you using with gemini 2.5 pro? I really don't like their vscode extension.


If your project is well organized and individual files are small and the dependency graph isn’t too crazy, Claude code does an amazing job building only the context it needs even as the project grows. You just have to be aggressive about refactoring for maintainability. The bonus here is that it’s easier for humans to work on too.


But my reading is that the parent wouldn't have tackled this at all without vibe coding, and would have used an off-the-shelf extension instead. So in that case, it's a pure win, no?


Yeah, I could have framed it better. I was responding to:

>I've been using AI to get smaller examples and ask questions and its been great but past attempts to have it do everything for me have produced code that still needed a lot of changes.

In my experience, most things that aren't trivial do require a lot of work as the scope expands. I was responding more to that than to his having success completing the whole extension satisfactorily.


So there is no technical debt?


That's a difficult question to answer because I don't know if I'll grow the extension in the future. Only time will tell.

After I completed the extension I did try it with another model, and despite my instructing it to generate a v3 manifest extension, the second attempt didn't start with declarativeNetRequest and used the older APIs until I made a refinement. And this isn't even a big project where poor architecture would cause debt.
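For reference, the manifest-v3 way to rewrite a request header is a static declarativeNetRequest rule along these lines (a sketch of the Chrome API's rule shape, not the actual vibe-headers code; the URL filter is a placeholder):

```json
{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "modifyHeaders",
    "requestHeaders": [
      {
        "header": "Accept-Encoding",
        "operation": "set",
        "value": "gzip, deflate, br"
      }
    ]
  },
  "condition": {
    "urlFilter": "||opensearch.example",
    "resourceTypes": ["xmlhttprequest"]
  }
}
```

The older approach the second attempt reached for would have been the blocking webRequest API, which MV3 restricts; that's the kind of distinction you only catch if you already know the extension APIs.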

Vibe coding can lead to technical debt, especially if you don't have the skills to recognize that debt in the code being generated.



