Hacker News | vb-8448's comments

It's the 5th of Feb 2026, and we already have our monthly "just use Postgres" thread.

btw, big fan of postgres :D


> Software development, as it has been done for decades, is over.

I'm pretty sure the way I was doing things in 2005 was completely different compared to 2015. Same for 2015 and 2025. I'm not old enough to know how they were doing things in 1995, but I'm pretty sure they were very different compared to 2005.

For sure, we are going through some big changes, but there is no "as it has been done for decades".


I don't think things have changed that much in the time I've been doing it (roughly 20 years). Tools have evolved and new things were added but the core workflow of a developer has more or less stayed the same.

I also wonder what those people have been doing all this time... I also have been mostly working as a developer for about 20 years and I don't think much has changed at all.

I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.


At some point I could straight-up call functions from the Visual Studio debugger Watch window instead of editing and recompiling. That was pretty sick.

Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job, drive-by Lisp person.


Isn't there a whole ton of memes about the increase in complexity, full-stack everything, and having to take on devops? That doesn't sound like nothing has changed at all.

I don't think that's true, at least for everywhere I've worked.

Agile has completely changed things, for better or for worse.

Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.


MVP is not necessarily opposed to engineered and considered. It's just that many people who throw that term around have little regard for engineering, which they hide behind buzzwords like "agile".

Yeah, I remember being amazed at the immediate incremental compilation on save in Visual Age for Java many years ago. Today's neovim users have features that even the most advanced IDEs didn't have back then.

I think a lot of people in the industry forget just how much change has come from 30 years of incremental progress.


But this time it will be different! This will be a huge change!

That's what they always say, and they're saying it again.


1995 vs 2005 was definitely a larger change than subsequent decades; in 1995 most information was gathered through dead trees or reverse engineering.

Maybe at the beginning... but over time? Who knows...

Btw, the end game is probably having ads in the LLM context... or directly in the LLM training set.


Ads would lower the quality of the training data, so RAG is more likely: pay to get your product's INSTALLME.md ranked under some specific semantic vectors.


I think they are completely screwing up the AI integration.

After years of JetBrains PyCharm Pro, I'm seriously considering switching to Cursor. Before Supermaven was acquired, PyCharm + Supermaven felt like having superpowers... I really hope they manage to catch up somehow; otherwise the path is written: crisis, acquisition by some big corp, enshittification.


JetBrains has AI support. It's a bit janky right now, but it is definitely getting better.

They have an MCP server, but it doesn't provide easy access to their code metadata model. Things like "jump to definition" are not yet available.

This is really annoying, they just need to add a bit more polish and features, and they'll have a perfect counter to Cursor.


The polish is what they seem to have trouble with lately.

I much prefer their IDEs to, say, VS Code, but their development has been a mess for a while, with half-assed implementations and long-standing bugs.


I'm biased (work at Cognition) but I think it's worth giving the Windsurf JetBrains plugin a try. We're working harder on polish these days, so happy to hear any feedback.


Augment Code has a great plugin for PyCharm (and all JetBrains products) if you don't want to throw the baby out with the bathwater.


Actually, I'm currently using Augment. It's good, but still subpar compared to the old Supermaven or Cursor.

One thing that I'm really missing is the automatic cursor move.


Interesting, I have completely stopped using the editor at this point and do everything through the agent except reading diffs.


I have active subscriptions to both Claude and Codex. They are good but, at least for me, they don't fully replace the coding part. Plus I tend to lose focus because of the basically random response times.


What about making Python 5x faster (the faster-cpython project)?


There are some nice improvements expected by 3.16. See https://fidget-spinner.github.io/posts/faster-jit-plan.html


> faster-cpython project

Seems to have died the same death as Unladen Swallow, Pyston, etc:

https://discuss.python.org/t/community-stewardship-of-faster...


I'm the author of the thread you linked. Community stewardship is actually happening in some form or another now.

3.15 has some JIT upgrades in progress. This has a non-exhaustive list of them: https://docs.python.org/dev/whatsnew/3.15.html#upgraded-jit-...


You can still outsource up to the VM level and handle everything else on your own.

Obviously it depends on the operational overhead of the specific technology.


In the engineering team velocity section, the most important metric is missing: the change rate of new code, i.e. how many times it is changed before being fully consolidated.


This is a great suggestion. I'll note it down for next year. Curious, do you think this would be a good proxy for code quality?


I would consider feature complete with robust testing to be a great proxy for code quality. Specifically, that if a chunk of code is feature complete and well tested and now changing slowly, it means -- as far as I can tell -- that the abstractions contained are at least ok at modeling the problem domain.

I would expect code that continually changes and deprecates and creates new features is still looking for a good problem domain fit.


Most of our customers are enterprises, so I feel relatively comfortable assuming they have some decent testing and QA in place. Perhaps I am too optimistic?


That sounds like an opportunity for some inspection; coverage, linting (type checking??), and a by-hand spot check to assess the quality of testing. You might also inspect the QA process (ride-along with folks from QA).


It's tricky, but one can assume that code written once and not touched in a while is good code (didn't cause any issues, performance is good enough, etc.).

I guess you can already derive this value if you sum the total lines changed by all PRs and divide it by (SLOC end - SLOC start). Ideally it should be a value slightly greater than 1.
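
Something like this, as a rough sketch (the function name and inputs are mine, just to illustrate the arithmetic):

    def churn_ratio(lines_changed_per_pr, sloc_start, sloc_end):
        """Ratio of total lines touched by PRs to net growth in SLOC.

        A value close to 1 means most changed lines survived as new code;
        much larger values mean code was rewritten several times before
        it consolidated.
        """
        total_changed = sum(lines_changed_per_pr)  # additions + deletions per PR
        net_growth = sloc_end - sloc_start
        if net_growth <= 0:
            return float("inf")  # codebase shrank or stayed flat; ratio not meaningful
        return total_changed / net_growth

    # e.g. 3 PRs touching 400, 250 and 150 lines on a repo that grew by 700 SLOC
    print(churn_ratio([400, 250, 150], sloc_start=10_000, sloc_end=10_700))  # ~1.14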


It depends on how well you vetted your samples.

FYI: you headline with "cross-industry", lead with fancy engineering productivity graphics, then caption it with small print saying it's from your internal team data. Unless I'm completely missing something, it comes off as a little misleading and disingenuous. Maybe open with what your company does and your data collection approach.


Apologies, that is poor wording on our part. It's internal data from engineers who use Greptile (tens of thousands of people from a variety of industries), as opposed to external, public data, which is where some of the charts are from.


Just out of curiosity: if I'm located in Spain and I set up an EC2 or DigitalOcean instance in Germany and use it as a SOCKS proxy over SSH, will you detect me?


It is even easier to block hosting providers. They typically publish official lists. Here are the full lists for both of those providers:

https://ip-ranges.amazonaws.com/ip-ranges.json

https://digitalocean.com/geo/google.csv

(And even if they don't publish them, you can just look up the ranges owned by any autonomous network with the appropriate registry.)
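
As a rough illustration, checking an address against the published AWS list takes only a few lines of Python (a sketch, assuming the documented ip-ranges.json layout with a "prefixes" array of "ip_prefix" CIDR blocks; the separate "ipv6_prefixes" array is ignored here):

    import ipaddress
    import json
    import urllib.request

    AWS_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

    def load_aws_networks():
        # Download the officially published IPv4 ranges and parse each CIDR block
        with urllib.request.urlopen(AWS_RANGES_URL) as resp:
            data = json.load(resp)
        return [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

    def is_aws_ip(ip, networks):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in networks)

    networks = load_aws_networks()
    print(is_aws_ip("203.0.113.7", networks))  # False: documentation range, not AWS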


It won’t end up in our proxy detection database, but we track hosting provider ranges separately: https://www.iplocate.io/data/hosting-providers/


That's a hosting service IP block. Some sites block them already. Netflix for instance.


Actually it's much less; big corps are using every possible scheme to avoid paying taxes.


Just out of curiosity: why not try putting Gemini or GPT in a loop and waiting until 100% of the test suite passes?
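
Something like this, I mean (just a sketch; ask_model_for_patch and apply_patch are hypothetical placeholders for the LLM call and patch application, and the only concrete part is using pytest's exit code as the stop condition):

    import subprocess

    MAX_ITERATIONS = 20

    def fix_until_green(ask_model_for_patch, apply_patch):
        """Keep asking the model for a patch until the whole test suite passes.

        ask_model_for_patch(failure_log) -> patch   (hypothetical LLM call)
        apply_patch(patch)                          (hypothetical helper)
        """
        for _ in range(MAX_ITERATIONS):
            result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # pytest exits 0 only when 100% of the suite passed
            patch = ask_model_for_patch(result.stdout + result.stderr)
            apply_patch(patch)
        return False  # gave up; there is no guarantee the loop converges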

