Hacker News

Nice post. But everyone needs to understand something: even if you follow these principles to the letter, you can still produce very bad software. You can also find many cases where people did the exact opposite of what this guy said and still produced great software. I'm sure many people can name examples of software that came together out of blind luck.

Why?

Because there is no formal definition of what makes software bad or good. Nobody knows exactly why software turns out bad or good, or even exactly what those terms mean. It's like predicting the weather: the interacting variables form a system so complex that it is practically impossible to predict with 100% accuracy.

What you're reading from this guy is the classic anecdotal post of design opinions that you can literally get from thousands of other websites. I'm seriously tired of reading this stuff year after year, rehashing the same BS over and over, yet still seeing most software inevitably become bloated and harder to work with over time.

What I want to see is a formal theory of software design, and by formal I mean mathematically formal: an axiomatic theory that tells me definitively the consequences of a given design, and an algorithm that, when applied to a formal model, produces a better model.

We have ways to formally prove a program 100% correct, negating the need for unit tests, but do we have a formal theory of how to modularize code and design things so that they are future-proof and remain flexible and understandable to future programmers? No, we don't. Can we develop such a theory? I think it's possible.
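To make the "prove a program correct" point concrete, here's a minimal sketch in Lean 4. The example and the choice of lemma are mine, not the commenter's; it just shows the flavor of replacing tests with a machine-checked proof:

```lean
-- Toy example: reversing a list twice gives back the original list,
-- proved once and for all, for every possible input -- so no unit
-- test over sample lists is needed. `List.reverse_reverse` is a
-- lemma from Lean 4's standard library.
example (l : List Nat) : l.reverse.reverse = l :=
  List.reverse_reverse l
```

The catch, as the rest of the thread discusses, is that this only works for properties you can state formally; "modular" and "future-proof" have no such statement yet.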




So you're not talking about "formal methods"? https://en.wikipedia.org/wiki/Formal_methods

The Applied Category Theory folks have some very interesting stuff, like Categorical Query Language.

https://www.appliedcategorytheory.org/

https://www.categoricaldata.net/

But it sounds to me like what you mean is more like if "Pattern Language" were symbolic and rigorous, eh?


Yes, this is exactly what I mean. Though I feel patterns can be formalized within the framework of category theory.


Have you read "Introduction to Cybernetics" by Ashby?

(PDF available here: http://pespmc1.vub.ac.be/ASHBBOOK.html )

Cybernetics might be the "missing link" for what you're talking about.


I didn't dive too deep into this, so I could be wrong, but this looks like control theory with elements of category theory.

I'm looking more for a theory of modules and relationships. Something that can formalize the ways we organize code.


From my POV control theory is rediscovering cybernetics, but yeah.

It sounds like CT is what you're after (to the extent that we have it at all yet...)


We know a great deal about dynamics, kinematics, thermodynamics, and generally the physics that governs car components, yet we are a long way from an algorithm that, applied to a car, produces a better car. My guess is that doing this for software is as hard, if not harder.

Also, the sentence "an algorithm that, applied to a formal model, produces a better model" has a strong smell of the halting problem, at least to this nose.


I get where you're coming from, but I think your intuition is off.

Intuitively, software can be modeled as a graph, with nodes representing modules and edges representing connections between them. An aspect of "good software" can be attributed to some metric on that graph, say the number of edges: the fewer the edges, the lower the complexity. An optimization algorithm would take this graph as input and output a graph with the same functionality but fewer edges. You could call that a "better design." This is all really fuzzy and hand-wavy, but if you think about it from this angle, I'm pretty sure you'll see that an axiomatic formalization can be done, along with an algorithm that prunes edges from the graph (in other words, improves a design by lowering complexity).
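A minimal sketch of that idea in Python. The module names, the dependency graph, and the merge-two-coupled-modules heuristic are all illustrative assumptions, not a real optimization algorithm:

```python
# Model a codebase as a set of directed edges (dependencies) between
# module names, score "complexity" as the edge count, and "improve"
# the design by merging two tightly coupled modules: their mutual
# edges become internal and vanish, and duplicate external edges
# collapse. Everything here is a toy example.

deps = {
    ("auth", "db"), ("auth", "session"), ("session", "auth"),
    ("session", "db"), ("api", "auth"), ("api", "session"),
}

def complexity(edges):
    """Complexity metric from the comment: number of inter-module edges."""
    return len(edges)

def merge(edges, a, b, merged="auth_session"):
    """Merge modules a and b into one node named `merged`.

    Edges between a and b become internal and are dropped; edges to or
    from the rest of the system are redirected to the merged node, and
    duplicates collapse because the result is a set.
    """
    out = set()
    for src, dst in edges:
        src = merged if src in (a, b) else src
        dst = merged if dst in (a, b) else dst
        if src != dst:  # drop now-internal edges
            out.add((src, dst))
    return out

before = complexity(deps)                           # 6 edges
after = complexity(merge(deps, "auth", "session"))  # 2 edges
```

Whether the merged graph really has "the same functionality" is exactly the part that would need the axiomatic theory; the metric and the pruning step are the easy half.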

A computer program is a machine that translates the complexity of the real world into an idealized system that is axiomatic and highly simplified. Such a system can be attacked by formal theory, unlike real-world questions like what constitutes a good car.


The halting problem bit was a shower thought with no supporting evidence whatsoever, so your complexity-lowering scenario may well be doable. However, paring down complexity is a strictly developer-side measure of goodness (assuming the low-complexity result is still readable, maintainable, and so on). We can agree that reducing bugs is also a very good user-side metric, but that tells only a (small) part of the story.

In my experience, developer-side evaluation has a very low impact (I was about to write: zero) on the perceived and actual goodness of the software itself, which is tied mostly to factors such as user experience, fit to the problem it was designed for, and fit to the organization(s) it is going to live in (user experience again). These properties do not strike me as amenable to algorithmic improvement, any more than "pleasant body lines and world-class interiors" in the original car analogy. But they are a (big) part of good software design, besides being the raison d'être of the darned thing to begin with.

But let's forget cars, hard as that is. A few months ago HN ran a story about developing software at Oracle. Now, Oracle may by now be a little soft around the edges, but I think most would agree that it has set the standard for (R)DBMSs for decades. Success may not in itself be the tell-all measure of software goodness, but the number of businesses that have been willing to stake the survival of their data on Oracle is surely a measure of its perceived goodness (since that other elusive factor, hipness, tends not to be paramount in the DBMS business).

The development-side story, taken at face value, was pure horror (https://news.ycombinator.com/item?id=18442941). Everything in it spoke of bad, outdated, rotting design. The place must be teeming with ideas on how to improve just about everything in that environment. And yet if that came to pass, maybe via some nifty edge-pruning algorithm, it would do nothing to improve the goodness-to-the-world measure of the software, not until the internal improvements translated into observables in the user base's experience. That kind of improvement will still require vast amounts of non-algorithmic design and, in the meantime, run a very concrete risk of deteriorating the overall user experience (because hey, snafus will happen).

This (internals are just a small part of the story) is one of the reasons so many reimplementations I have seen have failed ("hey, let's rewrite this piece of shit and make it awesome"), and the reason everyone resists the move from IPv4 to IPv6. I could think of many more examples.





