
I get and agree with what the author is saying here, but I also think a big part of this is that in software engineering, so much of what we do is ephemeral. If you're a carpenter you'll know if you're good or not. You'll be able to do stuff like frame a house, replace a door, etc. And then when someone asks you how long it will take to frame a house, how much it will cost, what supplies/staff you need and so on, you'll be able to say.

We've been doing CRUD in our industry for decades. How can we not just say "this is how you do CRUD, we're done w/ that now". We've been doing data serialization for decades now. How can we not just say "this is how you serialize"?

There are communities where this is the case. Why have we abandoned them? Why have we abandoned that knowledge and experience to reimplement things in language X or using platform Y?

We might not like to hear it, but my guess is it's a culture problem. They say the way to get ahead at Google is to build a new successful product. Is that the same thing we're doing? It's easier to get ahead by building a new Z framework than to become a core committer on X framework from 10 years ago? Are most X frameworks run by toxic communities? Is there something specific about software that means tenured projects become less and less useful/maintainable/understandable over time?

There's something in here that's specific to SWE. I don't know exactly what it is but, I think we should figure it out.



CRUD and data serialization are incredibly broad and diverse fields. It's like saying "we've been moving our hands and feet for centuries, how can we not just say: this is how we move our hands and feet?"

Software has so many more possibilities to explore than carpentry which is constrained by our current physical technology. It's far better to encourage engineers to explore these diverse possibilities than to encourage conformity and allegiance to some singular path that everyone is supposed to agree on and work towards. You would simply miss a lot of different innovations by grinding away on the same path. Communities that do so just stagnate.


This is my way of thinking: 99.99999% of applications out there will still store their CRUD in a standard relational DB and run on a standard operating system with standard protocols.

Sure you can create a lot of fuss all around it, but I feel we create a lot of fuss because of ego, because we want to be seen as having come up with new ways.

The reason to not conform is ego. Software is perhaps the cheapest ego boosting tool ever created.


Good. The only people who come up with new ways are the people who have the ego to try, and the world is richer for it. I'm glad the world is filled with engineers who try and fail and learn instead of those who would rather not create a fuss.


I think developing new ways to do CRUD is great but as an industry we take it too far.

I worked at an agency that produced CRUD apps at a rate you wouldn't believe. Every task was correctly estimated to the nearest hour. Add xyz entity 2hrs, add xyz frontend widget 3hrs, change deployment pipeline 4hrs etc. This was possible because they picked a tech stack and stuck with it.

I've also worked at companies where doing the same task could take 2 or 3 days, places where no task can be estimated at less than 1 day. The reason being that the infrastructure, deployment pipeline, tech stack, etc. are overcomplicated. Way too much overhead.

Unless you are building some massively scalable solution, all you need for the backend is Spring/Django/.NET and a SQL server with a single backend dev who knows his stuff. On the frontend you might need to change frameworks more often, but you can still go a solid 2-3 years building momentum before needing to switch.


I feel your pain.

Especially on the .NET side.

A general history of CRUD in .NET:

- Basic ADO.NET (Not too different from JDBC/ODBC, direct commands)

- First Gen ORMs; Linq2Sql (functional but only on SQL server, and missing some features)

- Entity Framework (4-6) /NHibernate. Lots of people wound up hating this, so they went to

- Dapper. Dead simple; Takes SQL and only maps the results back. Everyone loves it.... Similar abstractions are created over Linq (linq2db, SqlFu) as well, with less (but happier) adoption.

- EF Core is released. Everyone switches back over again.

The whole thing is silly.


Yeah, all the churn costs more time and resources than it saves. I personally just stayed with Dapper: simple and flexible. I think people have a problem with judging tech on any benefit at all rather than on a cost-benefit analysis. People also value cuteness and elegance for 'common' tasks over conceptual simplicity with a uniform degree of ugliness across all operations.


Yeah this is what I'm thinking. Yeah sometimes we need to figure out how "doors" work on the International Space Station, but 99.99999% of the time you buy a door kit from your hardware store and you're done. Same with serialization or CRUD or whatever, yeah maybe you do have really interesting requirements that are open research questions. But that's rare.

We're converging on this: "No Code", PaaS, FaaS, Zapier, etc. I'd be super surprised if there were lots of CRUD jobs in the industry in 10 years.


In 10 years there will still be plenty of companies that never adopted "current" trends.


Eh, yeah that's a fair point. I wonder if, after a while, starting at one of those companies will be like walking into one of those houses built by an eccentric, though.


Probably more like a house built 100 years ago. I bought a made-to-measure blind for my flat a few weeks ago. Followed the instructions, went to attach it to my window frame, only to find out that my window frame bows so much that the metal bar won't actually attach to the wall. Stuff like this is rampant in older housing, not just in eccentric builds.


In houses, upkeep matters more than age. 2 out of 3 buildings I've lived in are about 100 years old (not present on a map surveyed in 1914, present with the right house numbers on a map surveyed between 1920 and 1924), and my current flat is in a 75-year-old building. Reinforced concrete skeleton, and the rest is brick. Best flats I ever lived in: the brick structure dampens sound well, and the high ceilings/tall windows let in a bunch of natural light.


Humans have been constructing houses and doing maintenance on them for a few thousand years, but we've only been writing software for a few decades. We certainly didn't reach our current process for framing houses in the first few decades of carpentry.

That being said, I assume that the first few decades of carpentry didn't undergo as many changes as software has in its first few decades. My theory is that software changes so quickly because it can be bootstrapped. When framing a house, you can learn from the process so that you can make the next frame better by changing the process, but the output of that process (the house frame) doesn't directly affect the next time you attempt it. On the other hand, you can write software that invents an entirely new process for editing software (e.g. a compiler or interpreter), which you can then use to write software that might not have been possible before. You can then repeat this process with the new software, creating yet another paradigm for writing software, and so on. More generally, when a process produces a tool that can then be used to do the process in a new way, the process can evolve much more quickly than if it could only be updated with the output of other processes.


> They say the way to get ahead at Google is to build a new successful product. Is that the same thing we're doing? It's easier to get ahead by building a new Z framework than to become a core committer on X framework from 10 years ago?

A Kurt Vonnegut quote comes to mind:

"Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance."


I think the reality is that some people are actually just fine doing the maintenance - but they're unlikely to boost their career/paycheck by doing so to a degree comparable to what they'd have gotten from making a new thing instead. And that's an issue.

I'd love to go back to old code with the benefit of deeper domain knowledge and greater understanding of my tools and be able to make products even better. However, it's hard to square that against making +20% earnings by helping build a new chat app.


> some people are actually just fine doing the maintenance - but they're unlikely to boost their career/paycheck by doing so to a degree comparable to what they'd have gotten from making a new thing instead

Is that really the case? Forums like this look down on maintenance a lot, but I find that real-world companies do so much less.


Aside from cleaning and lubrication, a lot of "doing maintenance" is still throwing away old material and bringing in new. Just being selective about exactly which part is at the end of its duty cycle.

People talk about it like there's something wrong when, at any given time, a few microservices are being rewritten. But I would expect that for a sufficiently large machine, on any given day a few parts are being replaced.


Yes, but... Job security in this industry boils down to little more than evolve or die.


If someone wants a website that lists their company hours and has a contact form, that’s pretty much a known amount of hours for an experienced web dev. That’s about the equivalent of asking a carpenter to put in a door.

If someone wants a custom built order and inventory management system, that’s like asking a carpenter to build a custom 4 story house from some napkin sketches.

The whole reason computers are valuable is because they automate away all of the rote, repeated, predictable stuff. The unpredictable part of SWE is not comparable to carpentry, it’s more easily compared to architecture/engineering where the problem statements are vague and most of the job is getting agreements on what the thing will actually be. The carpentry part of programming is mostly predictable.


Agreed. Or in case someone brings up the good ol' "civil engineering" analogy - programming isn't like constructing a bridge. Constructing a bridge is what compilers do. Programming is the design and engineering that results in a blueprint. And our occupation is unique in that the construction part is so cheap, we can design iteratively, instead of actually thinking about what we're doing.


>There's something in here that's specific to SWE. I don't know exactly what it is but, I think we should figure it out.

It's changing requirements. When you build a house, people don't come in 6 months later and ask you if you could make one small change by placing some jet engines on the walls so the house can fly somewhere else during the summer. It's just a small change, right?

The problem is that in code, it often is a small change. Or at least, it is possible to make one quick adjustment to satisfy this new use-case. But often, these small changes are also a hack which doesn't fit into the previous overall design and which would've been implemented in a completely different way had the requirement been designed for in the first place. Now, one of these "small changes" doesn't tend to kill the product, but years or even decades of them do. That's why refactoring exists in software engineering, but not really in home building. Well, in some sense it does exist in renovating. But nobody thinks it's a good idea to completely renovate a house 25 times around an architecture that just doesn't work anymore for what it's being used for.

If you build a piece of software for exactly one well specified use case and only use it for that, it'll probably run really well forever. But (almost) nobody does that.


The difference between carpentry and software engineering is that the problem space in carpentry is much smaller and pretty much static over time. It's rare for carpentry tools to get an order of magnitude more powerful over the course of a decade, or for 100 people to work on the same carpentry project.


> The difference between carpentry and software engineering is that the problem space in carpentry is much smaller and pretty much static over time.

Anyone is free to compare software development with any engineering field, and those fields typically have to solve large problems.

Thus, if you feel carpentry is not a good comparison, then look into civil engineering.

And no, the key factor is not the 'power' of software tools. The key factor is stuff like processes and standardization.

Sometimes it feels like software developers struggle with, or even completely oppose, adopting and establishing processes and standards for doing things. Hell, the role of software architect is, to this very day, a source of controversy, as is documenting past and future work.

We're talking about a field that officially adopted winging it as a best practice, and devised a way to pull all stakeholders into its vortex of natural consequences as a way to dilute accountability. The field of software development managed to pull it off with such mastery that even the document where the approach is specified is not a hard set of rules but a vague "manifesto" that totally shields its proponents from any responsibility for the practice delivering poor results.

If an entire field struggles with the development and adoption of tried and true solutions to recurrent problems, then it isn't a surprise that basics like gathering requirements and planning are still the bane of the whole domain.


People make the civil engineering comparison all the time, but is software development really that much less standardized?

What's standard in building a bridge? You have some physical constraints (length, what the bridge is crossing), material properties, environmental constraints (temperature, weather, wind, what soil you're building on), what kind of traffic. Then there are standard 'shapes' (though it's your choice of suspension or whatever). You then have a ton of standard tests that you run to check that the bridge is fit for purpose. But it's not like the bridge is built out of legos, and even if a lot of standard subcomponents are used the assembly will still end up being fairly unique due to every location being different.

Software does in fact have tons of standardization. No one thinks of processor arch when doing web dev. Or DB implementation. Or how you modify the DOM (there are a handful of libraries to choose from, similar to a handful of bridge designs).

How do you make a CRUD app? You can do some arthouse project, or you can just use Rails or various Rails-like frameworks. They're all mostly equivalent.

How do you serialize data? JSON (before that XML, I guess). Yes, you can do something different, but you can also build an apartment building out of reclaimed wood.
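To make that concrete, the boring standard path already lives in any mainstream standard library. A minimal Python sketch (table and field names are just made up for illustration):

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")   # stand-in for "a standard relational DB"
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    # Create and Read, the unglamorous way
    conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
    rows = conn.execute("SELECT id, name FROM users").fetchall()

    # Serialize: JSON, same as everyone else
    print(json.dumps([{"id": r[0], "name": r[1]} for r in rows]))

Nothing in there has changed meaningfully in a decade; the churn is all in the layers we pile on top.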

The real uncertainty lies in the user interface, which really isn't engineering-driven; it's fashion and art and stylistic architecture. So yes, the way websites look tends to change and be fuzzy, but so do clothes, and no one complains about that.

I think software people both overestimate the standardization of physical engineering and underestimate the complexity of physical engineers' decisions, presumably they're not just following a recipe.

TLDR: When software standardizes a tool or process it becomes invisible, an import and forget somewhere in the pipeline of tools we use. This makes it seem like there's a lot of churn. But the churn is a bit of froth on the top of really solid foundations. Yes we're always working in the churning part, but that's because the foundational part requires almost no interaction at all.


Ok, let's take another angle on this. The fundamental difference between software and most other engineering domains is that software doesn't involve physical matter (at least directly). The standards and design patterns in civil engineering, mechanical engineering, etc. are driven by physical constraints, whether it be monetary cost for constituent parts, or time cost for delivery, or just the limits of physics in general. Many of these limits are non-existent in software. There is no physical weight to a software object. A poor 10-year-old can make a million copies of it as easily as a rich software company.

Now there is software that tightly follows specs and standards, and you typically find it in critical systems, such as medical and aerospace. But there are orders of magnitude more software projects than non-software engineering projects because they require so little to instantiate. There is almost no barrier to entry with software, no BOM, and no supply chain.

Perhaps it would help to only call a subset of software projects as "engineering" - that would solve the problem. Not all software needs to be engineered. I don't need to engineer a script that downloads some videos for me or my personal website. And that's not a bad thing.


The availability of inexpensive CNC machines and 3D printers has certainly bolstered carpentry productivity in the last 10 years. Probably not “an order of magnitude more powerful”, whatever that means, but as one very successful carpenter friend put it: “I don’t even fuck around with table saws anymore”.

By contrast, I’m still writing code more or less the same way I was 10 years ago, with mostly the same tools, and have not seen “order of magnitude” level of anything contributing to my productivity.


Basically, this and other comments show that the analogy completely breaks down. The scales, changes in scales, and degrees of freedom are just utterly different from anything physical humans build.


> It's easier to get ahead by building a new Z framework than to become a core committer on X framework from 10 years ago?

Sometimes, yes.

This particular angle is explained in the article:

>> This is ego distraction in action. Self comparison determining effort. If we feel like we’re ahead we continue to put in the effort. If we feel like we’re not, we determine it’s not worth the effort.

The reason people would prefer working on a newer project/framework/whatever is that there is a higher chance they might be able to contribute meaningful code/support. I'll admit to that, and I am sure many have similar thoughts. It is purely guided by where one thinks success is achievable.

Also keep in mind - progress is being made. Python is clearly more productive than Perl. Or Django vs CGI/FastCGI. So 15 years ago, if those were my two choices for two projects, I would have taken the path of Python. Not just because it was new & shiny then.

Fast forward a decade: Go is clearly more productive than many things that came before. Kafka is clearly easier to manage than home-grown queues built on databases and flat files. So why should I stick to the old process?

The problem, I feel, is the lack of any agreed standards for anything basic. We have 10 message queues, but limited interoperability. We have 50 popular databases, but no easy migration. We don't even have universal support for Parque in all languages even though it has been around for a while. When can I grep a parque file? Something as simple as Azure Blob Storage and Amazon S3 can't be linked together without arcane and inefficient copying.
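As for grepping one of those files: today the closest thing is reaching for a library rather than a shell one-liner. A rough Python sketch, assuming pyarrow (plus pandas for the filter step) is installed, with a hypothetical file and column name:

    import pyarrow.parquet as pq

    table = pq.read_table("events.parquet")   # hypothetical file
    print(table.schema)                       # column names and types

    # the closest thing to grep: filter rows on a column
    df = table.to_pandas()
    print(df[df["user_id"] == "abc123"])

Workable, but it proves the point: there's no universal, tooling-free way to poke at the format the way you can with a CSV.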


One of the biggest difficulties of ego in software comes from the difficulty of finding "the ground".

New languages are popular. Why are they popular? "Because they are better." But in every other domain of software we also say "The best technology doesn't always win." Why would languages be any different? What if Go is, in fact, Worse is Better? And if it's a Worse is Better, then what is the Right Thing?

Ultimately, I think most programmers, given enough experience, eventually settle on a style and propel the style through the language, not the other way around. And to that end, there can always be new languages so long as there are styles of coding that remain unaddressed.

But this is counterbalanced by the assumption of a rationalist project existing: that code is made to be shared, and to be shared, it must be standardized.

If one looks at the hardware/software ecosystem, it is not rationalist in the slightest, though. It is a merciless field of battle where vendors maneuver against each other to define, capture, control, and diminish standards. The small ones seek to carry a new standard; the large ones absorb the standard into their empire.

Software bloat is a result of this: everything must go through a compatibility layer, several times, to do anything. Nobody understands the systems they use. With each wave of fresh grads, another set of careers is launched, and they join in on the game and add more to the pile.

In that light, rational standards do not exist. They are simply the manifest ego of "us and them", and therefore are mostly a detriment for all the reasons that ego is a detriment.

There exist several examples of excellent feature scaling from small codebases: VPRI STEPS, various Forth, Lisp, and Smalltalk systems, project Oberon, and microkernels such as Minix. The quality they all share is an indifference to standards: they are sometimes used where convenient, but are not an object of boasting.

Therefore I currently believe that developers should think of standards as reference material, not ends in themselves - that is, you use one if you can't come up with a better way of getting the result.


Did you mean Apache Parquet? If not, what is Parque?





From my perspective, the root cause of this problem lies in the lack of one common measure for code quality, correctness, and usability - or even for programming itself.

Say there are two approaches to a problem - how do we decide which one to go with? In the last 10 years I have not seen a single case where the decision was made based on something other than the subjective opinions of a person or a group of people. "Past experience", "this is how it's done here", "this is the only way I can do it" and countless other reasons - all of those are subjective and cannot be used for objective comparison of approaches.

You could say "days to implement" or "money spent" is such a metric - but then, there is no reliable way to mathematically calculate this measure for code you plan to write, and to prove it, in advance.

To put it another way - there is no standard unit of code/system correctness by which we could measure what we are actually doing or planning to do. Until one emerges, we are bound to continuously reimplement the same things over and over again, justifying it by nothing other than our projections, prejudices and boundless ego.


I agree it's a culture problem - we developers can't agree on anything even when someone else has already gone to the trouble of defining a standard. I also think there is another component which is inherent to the software engineering profession: technology moves fast, and some things are indeed worth adopting because they are beneficial in the long run, even if it means reinventing the wheel or having to re-learn something from scratch. But understanding which is which is not that simple. Every time I start a new project in the team, we need to learn a new way to deploy, a new way to instrument the code for metrics, a new integration test framework, the new features of the CI/CD pipeline which replaced the old ones, maybe a new framework or even a new language. This is even before writing any meaningful code. How much of the new stuff is an improvement, rather than just a slightly different flavor of the old stuff?


> Is there something specific about software that means tenured projects become less and less useful/maintainable/understandable over time?

Complexity. Understanding a legacy codebase is pretty much a small-scale research project. You need to gain domain knowledge, become familiar with the team, and get acquainted with the codebase and its history before you'll be able to reliably tell bad code from clever solutions to tough problems. The longer a codebase is developed, the more there is to learn and retain in your head. It very quickly becomes just too much, which means onboarding people takes a lot of time, and day-to-day development involves either being extra careful or creating obscure bugs - both of which make the project take longer.

> They say the way to get ahead at Google is to build a new successful product. Is that the same thing we're doing? It's easier to get ahead by building a new Z framework than to become a core committer on X framework from 10 years ago?

Yes and no. Not every one of us plays the office politics. Some of us code because we like it. The yardstick then is one of personal growth, the ability to comprehend and build increasingly complex, powerful and beautiful systems, or automate mundane things faster and faster.

But, regardless of the "core drives", one thing is true: building a system from scratch is a much faster way to learn about the problem domain than trying to understand someone else's system and maybe contributing a patch somewhere. We learn by doing. That's why there's so many half-baked libraries for everything out there. Yes, there is ego involved - particularly for people who go out of their way to make their half-baked libraries seem production ready - but a big part of the picture is still that programmers learn by writing code and building systems.

(The difference from most other professions is that people there can build stuff xor share stuff - not both at the same time.)


I disagree that the cause of the problem is complexity stemming from size, and propose that the real issue is the industry's poor history of efficient documentation. Processes to efficiently create and read documents that describe large systems are rarely in place at most of the places I've seen. That's probably the biggest barrier to contributing code to old framework Y. It's just easier to develop framework Z. I agree with you that some things can be designed to reduce complexity, but ironically, whenever something like this happens, someone from an older product will glean ideas from the new one and port some of those concepts over (potentially further proving that the big problem is the lack of resources for understanding).


Technology changes and user expectations change, and we need to adapt.

And it's not my area, but this seems to be true in construction as well? The building codes change, and available materials and components change, as do their relative prices. Maybe not as fast, but fast enough to make older books out of date.


Have to disagree with most of this. Technology changes and user expectations change, but there’s a missing link here to show that either of these really necessitates Yet Another Language/Framework, launching Yet Another Product/Service, or rebuilding things from the ground up. It’s a bit like a homeowner wanting an updated kitchen and a contractor telling them they need a whole new house for it to work, when really the contractor just prefers building flashy new houses for their portfolio over doing renovations on a budget.

Also, side note: with respect to carpentry, books from 50+ years ago on woodworking techniques, framing, joinery, etc. are perfectly relevant today. And many of my grandfather’s tools are still in use in my workshop.


But "good carpentry" is primarily a judgement based on physics, with some haptics and design psychology and (hopefully not) entomology.

Humans are pretty good at physics. At the layer of abstraction where carpenters work, our predictive ability is solid.

What fields of science are the primary judges of "good software"?

> Programs must be written for people to read, and only incidentally for machines to execute
> -- Harold Abelson

So it is pretty much _all_ psychology and cognitive science.

Humans are not yet that good at cognitive science because brains are complicated. There is real disagreement about how Working Memory operates -- and Working Memory is core to why modularity matters!


>We've been doing CRUD in our industry for decades. How can we not just say "this is how you do CRUD, we're done w/ that now"

As an analyst, can you explain this bit?

I keep hearing things like "that's not actually a software development job, just CRUD", "we're done with doing CRUD", etc. But it seems like, between the application and the DBA, all the CRUD is taken care of; wouldn't the developer just work on the application itself? And isn't saying "we don't do CRUD anymore" somewhat akin to saying "we don't do [+-*/] anymore"? How can you have persistent data without CRUD? I must be missing a piece of the puzzle in this discussion.


It's a reductive, dismissive way of thinking, like saying that everything is ones and zeros, or that we're just copying protobufs around.

The data that we manipulate has business meaning and there are consequences for the users that arise from how we model things. Consider the genre of articles like "Falsehoods Programmers Believe About Names" [1]. There is ridiculous complexity here, for those willing to see it, but some people get tired of it.

[1] https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-...


Not OP, but my take: When people talk about "CRUD" in the way you describe, they're usually talking about one of two separate (but related) things.

The "it's not _actual_ development" framing is usually directed at applications which "only" allow users to perform basic actions on some data, basically UIs for manipulating a database. It is absolutely real development (in my view), but less sexy than AI/ML, big data, etc, etc.

You are correct that every application (with some sort of data persistence) needs CRUD. But how CRUD is implemented, for better or for worse, depends on the requirements of the application storing the data. For (most) relational databases, the low-level "how do I CRUD" is well defined: standard SQL queries. But if I use NoSQL, or flat files, or something else, it changes.

The definition of CRUD also varies depending on the layer of abstraction within an application or the perspective of the user/developer. For example: from a DBA's perspective, CRUD is SQL queries. From a UI, CRUD might be a JSON API or GraphQL endpoint. From a server-side application, CRUD might be a specific ORM library.
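A toy illustration of the same "create" seen from two of those layers, sketched in Python with stdlib sqlite3 standing in for the database and a made-up helper standing in for whatever ORM the application happens to use:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, sku TEXT, qty INTEGER)")

    # DBA's view: CRUD is literal SQL statements
    conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", ("ABC-1", 3))

    # application's view: the same create, hidden behind a (hypothetical) ORM-ish helper
    def create_order(sku: str, qty: int) -> int:
        cur = conn.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", (sku, qty))
        return cur.lastrowid

    order_id = create_order("XYZ-9", 1)
    print(conn.execute("SELECT * FROM orders").fetchall())

Same operation, same table; only the layer you're standing on changes what "CRUD" looks like.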


Yeah, CRUD is a solved problem but you still have to do it.

Mapping state to the database is to web dev what applying paint to the canvas is to painting. It’s how you do it that counts. Saying otherwise is overly reductionist.

Frameworks exist that abstract CRUD away. But you end up sacrificing UX and / or flexibility.


Picking the right level and nature of abstraction for the problem at hand is something of an art. Too high and you'll straitjacket yourself. Too low and you'll spend most of your time maintaining ugly boilerplate.

One of the many reasons why CRUD is way harder than its reputation gives it credit for.


I suspect their point is more: if it's a solved problem, why do we keep making new ways to do it?


CRUD is looked down upon because it's time consuming and repetitive when you do it with poorly designed tools and because it's the most common role.

I think it's mostly a class thing though. Test automation is similarly looked down upon even though it is often much harder to do right than regular coding.

There is a definite pecking order when it comes to programmer roles and it's not necessarily related to difficulty (although it correlates very strongly with pay).


I remember reading an article that studied junior and senior devs and discovered that there was no way to get better at debugging. No matter how much experience someone had, their ability to problem solve was about the same.

I think that might have to do with this complexity, but also: software has so many ways of doing something, even within the same language -- and that gets permuted across, say, five different languages (Python, Rust, PHP...). It's impossible to say the "right" way to do it, because there are multiple ways to achieve a valid result that's readable, AND there is a margin for disagreement about what is "readable".


I feel this needs better context because, besides not being able to prove a negative, debugging goes well beyond the essential ability to "problem solve". And as an anecdote, I've certainly gotten significantly better at debugging with experience, in many respects. For instance, the ability to recognize a somewhat common bug based on its symptoms is something that, at least within a certain context, improves with experience and is at least to some degree "getting better at debugging".


I’d love a link to that article if you can find it. I wasn’t able to on my own.

I was just thinking today about how to teach someone to be better at debugging.


Well written article, well written response. Sometimes we humans think that perceived improvement of conditions = improved conditions. This is false, but as a business guy calling the shots, my goal is to do the thing on the paper in front of me by the deadline, whatever the cost. Combine that with a developer's creativity and you get a new framework.


I'd venture it's not so much the technique behind the individual layers but the understanding of the need for all the layers and their interactions and the best practices in given situations.

We're prone to tediously repeat the same conversations over and over and take the cosmetic approach rather than the fundamentals-first way of doing things.


I think it's all there if you commit to using tech more than 10 years old.




