Hacker News

In a galaxy far far away, there was a kingdom where people didn't know structured programming. Their programming languages only had (conditional) branches, jumps, or GOTOs (as they were colloquially called). And they had a very influential programming book, called "GOTO design patterns". Here are a few examples of design patterns from the book:

- "IF pattern" - If you need to execute code only if some condition is true, you write a conditional GOTO that lands after the code that is to be executed conditionally.

- "IF-ELSE pattern" - If you need to execute code only if some condition is true, and some other code if the condition is false, you use IF-ELSE pattern. It's similar to IF pattern, but at the end of the code in the IF pattern, you add another unconditional GOTO that branches after the code that is to be executed if the condition is false.

- "WHILE pattern" - If you need to repeatedly execute code and check some condition after each execution, you add a conditional GOTO at the end of the code that goes back to the beginning of the code you want to repeatedly execute.

- "BREAK pattern" - This is a pattern to use with the WHILE or UNTIL patterns. If you need to, on some condition, stop repeated execution of the code in the middle of the code, you add a conditional GOTO into the middle of the code that just branches after the repeatedly executed code block.

Everyone was happy with the book. No more messy spaghetti code full of random GOTOs (as was common before the book appeared), because young apprentices memorized all the patterns in the book, and they just applied them rigorously. Also, understanding code became easier, because everybody instantly recognized those patterns in the well written code, and they had a common vocabulary to talk about these things.

One day, a traveler from another world came into this kingdom. He looked at the book and wondered: "Why don't you just add these patterns into your computer language as abstractions? Wouldn't that aid comprehension even more?" Unfortunately, he was executed for the heresy.



This is, and always has been, a poor criticism of design patterns, by people who think they're just a library of code that idiots are supposed to cut and paste to be better programmers.

A common example from the GoF book is the Strategy pattern - most functional programming snarks will jump in and say you don't need this! We have first class functions you can pass around! And it's entirely true that an implementation of this pattern (a good, simple one even) is just accepting functions as arguments - cool, you probably get it! But the _design_ principle is about making strategies _explicit_ - allowing decisions to be made or policies to be enforced in ways you don't anticipate. I still see _tons_ of libraries written by smart people in functional languages that will still accept an enum of some kind to tell them what to do in certain situations (e.g. a lot of HTTP libraries' extremely limited retry logic). That's an explicit design decision - to limit options instead of opening them up. Having a language to talk about the _design decisions_ at a higher level, instead of conflating them with language features, is the whole point. Everybody knows you can pass functions around as arguments; that doesn't necessarily prompt you or make it easier to discuss implementing a Strategy.
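To make the contrast concrete, here is a minimal Java sketch (all names hypothetical): a retry decision exposed as a closed enum versus as a first-class strategy the caller supplies.

```java
import java.util.function.IntPredicate;

public class RetryDemo {
    // Closed design: the library enumerates the only policies it anticipated.
    enum RetryMode { NEVER, ON_SERVER_ERROR }

    // Open design: the caller passes any predicate over the status code.
    static boolean shouldRetry(int statusCode, IntPredicate strategy) {
        return strategy.test(statusCode);
    }

    public static void main(String[] args) {
        // A policy the library author never anticipated: retry on 429 only.
        IntPredicate retryOnRateLimit = code -> code == 429;
        System.out.println(shouldRetry(429, retryOnRateLimit)); // true
        System.out.println(shouldRetry(500, retryOnRateLimit)); // false
    }
}
```

The enum version limits callers to the options the library author thought of; the predicate version makes the strategy explicit and open-ended - which is the design point, independent of whether the language calls it a "pattern".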

Design patterns aren't a static, finite set of code snippets, and I actually think things like the linked post that mostly treat them that way aren't helpful. They give you a way of naming and talking about components of a design, possibly at a higher granularity than your programming language (and sure, different languages have different granularities).

We've been hearing this same exact criticism for literally 20 years and it's boring at this point. I get that most people don't bother creating new design patterns in public these days (because they've internalised that modern languages have 'design patterns' built in), but if you look at something like React and think it's just JavaScript and there are no design decisions in there worthy of reifying, I think that's sad and unproductive.


My main point (just to make sure we are on the same page) was that whatever patterns are, we should also tell computers "look, here I use this pattern", and not rely just on humans to see it. Unfortunately, computers are idiot savants, so this very well might require those patterns to be formally defined.

Anyway, I think we have an ontological disagreement here. I don't believe the "design patterns" which cannot be formally defined are useful, or even that they exist. I consider them to be just phantoms, which when you attempt to formally define them, you either understand them much better (to the point where you can see how they are related), or they disappear as something that was misconceived in the first place. You seem to think that there is something real.. fine.

Possibly I see it that way because of my mathematical training. In mathematics, this battle was fought more than a century ago and the formalism camp won. That doesn't mean that intuition is completely useless.. just that people mostly keep their intuitions (and inspirations and analogies) to themselves, and when they communicate, they strive to be formal.

To your React example - it's not a pattern, it's a piece of concrete reusable code. We don't need "React pattern", but we need React. Just like mathematicians, we should communicate "patterns" via reusable code.


But you don't _talk_ exclusively in formalisms to other mathematicians, do you? You don't immediately jump into a specific form of calculus upon first hearing of a concrete real world problem?

And I'm not saying React itself is a pattern. I'm saying if you wanted to create your own React-like framework, or to communicate the design decisions and mechanisms that make it different from other frameworks, there is value in extracting and naming those concepts. This way, people can appreciate, understand, compare and contrast them _outside_ the context of one piece of concrete reusable code. As an entirely throwaway and unthought-out example, you might say 'Differential Rendering' is philosophically different to a 'Mutable Document'.

But if you think all things in the design of software are reducible to reusable pieces of code, more power to you.


> But you don't _talk_ exclusively in formalisms to other mathematicians, do you?

In an informal setting (like a lecture or workshop), maybe not. But when publishing, you always talk in concepts that are defined formally. That doesn't mean that you use formal language all the time, but every time you introduce some definition or term, it is understood (obvious) that you could provide a completely formal definition. Also, having a formal definition doesn't preclude using motivational examples.

So no, I don't think a book like "Design Patterns" could happen in professional mathematics today. Mathematicians simply don't introduce a shared vocabulary of informal definitions. But it wasn't always like this - Euler used words like "curve" without having a completely formal definition (and without worrying about it). But eventually we learned this was a problem, and I think we will learn the same about programming patterns (and that's why people IMHO largely stopped trying to name new patterns).

> And I'm not saying React itself is a pattern.

Yeah, I understand, I was just taking a rhetorical shortcut. What I am saying is that you should be able to point at a set of functions in React and say "these embody one of the ideas behind React" (although there can be technical reasons that prevent doing exactly that on the current React codebase). Then the "pattern" you have in mind is well-defined by these functions.

> But if you think all things in the design of software are reducable to reusable pieces of code, more power to you.

I think it must be, because "pieces of code" are the ultimate end goal. So there must be a point where the informal definition of a pattern gets translated into formal code. And to do this translation reliably, you need to be able to enumerate (or parametrize) the choice of formal options that stem from the informal concept, and this process is actually the formalization of said concept.

I would say software design patterns are rather a special case, because they involve code readable by a computer. In contrast, the original design patterns in architecture don't need to relate to computer code (there is no code reuse), so they are perfectly fine with not being formally defined.


I guess I just don't inhabit this world where there is nothing more to software design than reading and writing code. I write comments, I read readmes and documentation, and I find shared, human descriptions of larger granularity design abstractions useful. ¯\_(ツ)_/¯


> I write comments, I read readmes and documentation, and I find shared, human descriptions of larger granularity design abstractions useful.

Oh, I have nothing against that, except the word "shared". I just don't think that "design patterns" (from the GoF book) helpfully contribute to that. In practice, it really depends on what you want or don't want to gloss over as a technical detail, and just saying "there is this pattern" doesn't give you enough flexibility to do it.

What I am saying is that there is a trade-off - if you decide to name something, you should define it well enough not to be misunderstood. If you can't or don't want to define it well enough, it's better not to name it at all (except maybe temporarily); just describe your intuitions/feelings about it. The authors of "design patterns" seem to be oblivious to this trade-off (although I think it wasn't really their fault that the names for the patterns got so popular, I think they just wanted to have fancy chapter titles).

It's interesting though. When I describe how something works or should work, I use terms like "client", "server", "memory", "unique random number", "sort by", "find maximum", "queue", "remote call", "common API" etc. But I never found a compelling reason to describe things in names of (standard) design patterns. Why are these terms not considered to be design patterns? In my opinion, it is because the design patterns tend to be overly specific in some ways (talking about class/interface relations) without adding enough substance to be meaningful on their own.

So I don't find them useful for describing design (overly specific), nor do I find them useful for describing code (not formal enough). They are just kind of slippery for both use cases.


There are downsides to abstraction, too.

The urge to explicitly abstract every pattern can end up producing a brittle network of dependencies. It can become really hard to adapt the formal abstractions to small requirement changes—either that or the abstractions end up byzantinely parameterized.

Or you have to spend decades doing category theoretical research to discover the platonic essence of your patterns...

Formalization hasn't really won in math communication, has it? How much math research is published as Coq terms or whatever? Maintaining all of math as a network of formal abstractions is really difficult.

Basically the insistence on maximal abstraction seems to imply that you try to minimize the length of application source code and as a side effect you maximize the complexity of programming languages and frameworks.

By the way, Richard Gabriel discusses this issue in a chapter of his book "Patterns of Software" (free PDF available).


I think formalization and abstraction are two different things, and you conflate them in your comment.

Formalization, in my view, is about "knowing what you are talking about". So you have a perfectly defined set of basic operations on the concepts you use, and you know what you can do with them and what happens if you do; you cannot cheat by making stuff up as you go along.

On the other hand, abstraction deals with "how general I can make this thing" (AKA reusability).

What I mean by "formalization has won in mathematics" is that when you write a paper, all concepts you are using have to be formalized, or at least it must be obvious how they can be, typically through a shared context of standard theories (so you don't have to use formalism everywhere, although it would certainly be desirable and eventually I think it will happen).

But that doesn't mean you always have to use the maximal level of abstraction. It's perfectly valid mathematics, for instance, to prove a theorem just for Euclidean spaces even though there is nothing that could stop you from proving it for topologies. (And I think the Bourbaki group learned about the cost of abstraction the hard way.)

The same is fine with code - if you think abstraction hurts it, absolutely go for lower level of abstraction. But it will hurt reusability of said code, no doubt.

But if you want to talk about some reusable concept, you should be able to show the code at the correct level of abstraction (or at least be able to show that you are able to show it).

So that's my beef with software design patterns - they are abstractions, but aren't formally defined. It's like wanting to have your cake and eat it, too. They are at best transient thoughts ("funny, this thing here looks awfully similar to that thing there, how so") before somebody abstracts them formally. And I cringe when people think that they help communication, because in mathematics, we already went through that.


I was using abstraction in the sense of "lambda abstraction", like a formally specified function with parameters.

Well, I had to remind myself what the actual design patterns "movement" is, and I get the impression that it's a dead movement with no development since the classic book and a very limited influence.

It seems that maybe the bigger problem is that there is no "theory of software" aside from a few interesting attempts like "algebra of programming" etc. That's why I can't really blame people for using software concepts that aren't formally specified.

What's your favorite alternative to design patterns, or like the path you think is more fruitful and good?


After a few years of skepticism, I became convinced that pure functional programming (a la Haskell) is the right path forward. (What convinced me was that you can do the same things as in Lisp, with type checking; see http://www.haskellforall.com/2012/06/you-could-have-invented... and more recently http://www.haskellforall.com/2016/12/dhall-non-turing-comple... .)

I think having very strong foundations is very comforting, and the separation of side effects (and their introduction into the type system) has important consequences for composability. But it is still hard and a work in progress.

One of the more subtle criticisms I have of the classic design patterns is that they often rely on interfaces as descriptions of objects (in the general sense). But an interface is a very weak description, and indeed, the most useful type classes (which are comparable to interfaces) in Haskell actually also have laws (not expressible in the type system, for better or worse) that describe behavior. This was a point made by Erik Meijer: really, you want to describe the objects (that your abstractions can work with) with the laws governing them, not just interfaces.

So yes, I think the type classes with laws that are commonly used in Haskell (which is actually pretty much an application of mathematical concepts) are the way forward, because they are both better defined and more fruitful (thanks to being explicit about laws) than design patterns.
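To illustrate the interface-versus-laws point in a mainstream OO language, here is a hypothetical Java sketch: the `Monoid` interface alone says nothing about behavior, so the laws live in comments - or, as here, in an executable check the type system cannot express (in Haskell they would attach to the type class and be tested with something like QuickCheck).

```java
public class MonoidDemo {
    interface Monoid<T> {
        T empty();            // law: combine(empty(), x) == x == combine(x, empty())
        T combine(T a, T b);  // law: combine(a, combine(b, c)) == combine(combine(a, b), c)
    }

    // A property check outside the type system, but still runnable.
    static <T> boolean obeysLaws(Monoid<T> m, T a, T b, T c) {
        boolean identity = m.combine(m.empty(), a).equals(a)
                        && m.combine(a, m.empty()).equals(a);
        boolean assoc = m.combine(a, m.combine(b, c))
                         .equals(m.combine(m.combine(a, b), c));
        return identity && assoc;
    }

    // String concatenation with "" is a lawful Monoid instance.
    static final Monoid<String> CONCAT = new Monoid<>() {
        public String empty() { return ""; }
        public String combine(String a, String b) { return a + b; }
    };

    public static void main(String[] args) {
        System.out.println(obeysLaws(CONCAT, "x", "y", "z")); // true
    }
}
```

The interface is satisfied by any `empty`/`combine` pair, lawful or not; only the laws make the abstraction mean something.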

In particular, I suspect the whole field of group theory is still waiting to be discovered by programmers; I had this idea of looking at error handling and recovery, or transactional processing, as group operations within a certain subgroup (of valid system states). So a no-op or an empty transaction is the neutral element, and recovery from an action or a transaction rollback is the inverse element in some group. I am not yet clear on whether this is helpful or trivial, and I am not really sure how to proceed with it. But Monoid abstractions are already widely used in Haskell, so why not Group ones?
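As an entirely speculative sketch of that idea (hypothetical names, Java for illustration): a Group adds an `inverse` to a Monoid, and "rollback" then falls out of the algebra as combining a state with the inverse of an action.

```java
public class GroupDemo {
    interface Group<T> {
        T empty();           // neutral element: the no-op / empty transaction
        T combine(T a, T b);
        T inverse(T a);      // law: combine(a, inverse(a)) == empty()
    }

    // Integers under addition form a group; read an element as
    // "a delta applied to an account balance".
    static final Group<Integer> DELTAS = new Group<>() {
        public Integer empty() { return 0; }
        public Integer combine(Integer a, Integer b) { return a + b; }
        public Integer inverse(Integer a) { return -a; }
    };

    // Rolling back an action = combining the state with the action's inverse.
    static int rollback(Group<Integer> g, int state, int action) {
        return g.combine(state, g.inverse(action));
    }

    public static void main(String[] args) {
        int state = DELTAS.combine(100, 25);             // apply action: 125
        System.out.println(rollback(DELTAS, state, 25)); // back to 100
    }
}
```

Whether real recovery logic forms a group (every action invertible) is exactly the open question the comment raises; this sketch only shows what the abstraction would look like if it did.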


I'm super interested in all that stuff, but I also feel that it's not for everybody. Like just yesterday I saw someone on the Rust subreddit who mentioned that they think Haskell is cool but it tends towards too high abstraction levels for their taste, which I totally understand.


> You can do the same things as in Lisp, with type checking

Can't easily do even simple things like variable-argument functions, or heterogeneous lists (required for representing source code for macro programming).

Lisp has loops, variable assignment, stateful objects of various types and so on. Also, strict evaluation for straightforward, deterministic sequencing.


I only touched on that point, but as I said - I used to think the same, but then I actually studied it and changed my mind. You can do all that in Haskell too, including interpreting homoiconic code and building DSLs (which was my main point), and it's all built on more solid foundations with integrated static type checking. It just works quite differently.


I didn't mean to imply that Haskell isn't Turing complete, of course.


It's not about that. Haskell has tools to do all that, and they are better. You can write imperatively in Haskell, you can write strictly or lazily in Haskell, you can have dynamic types in Haskell. There is actually more choice.

However, one (unfortunately) needs to study it to see it. I like Lisp, but curiously enough, it has its limits too.


> "Why don't you just add these patterns into your computer language as abstractions? Wouldn't that aid comprehension even more?"

I often asked myself this question! Why do I have to say

    public class DatabaseConnectionSingleton {
        public static DatabaseConnectionSingleton getInstance() { ... }
    }
and then get an object using:

    var db = DatabaseConnectionSingleton.getInstance();
Why can't I say

    public singleton DatabaseConnection {...} 
and just go

    var db = new DatabaseConnection();
and let my runtime make sure that I get the appropriate instance? Leaving aside the discussion of how useful that pattern is, I would like my language to be more powerful. I am aware of alternatives (Borgs in Python) and that many people consider Singletons an antipattern, as they help introduce globally shared state, etc., but then, why do I have to care about how instances are implemented?
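For what it's worth, Java does have one narrow feature that comes close to reifying this particular pattern: a single-constant enum, whose one-time instantiation the JVM itself guarantees (the approach recommended in Effective Java).

```java
// A single-constant enum: the JVM guarantees exactly one instance,
// with serialization and thread-safety handled for free.
public enum DatabaseConnection {
    INSTANCE;

    public String status() { return "connected"; }
}
```

Usage is `var db = DatabaseConnection.INSTANCE;` - still not `new DatabaseConnection()`, but the getInstance boilerplate and the lazy-initialization pitfalls disappear.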

After all, in C#, foreach interacts with iterators, why not push other mechanisms down into the language in the same way?

Ideas:

    // Unit of work, commits changes before the reference gets removed from stack
    public unit CustomerForm { } 

    // There can be only one.
    public singleton DatabaseConnectionHighlander {}

    // Any method called updates values in some way
    public builder QueryString {}


Just one of many reasons that Scala is such an attractive option for Java developers:

    // There is only one
    object DatabaseConnectionConnor {}


It's true that other languages often don't require certain patterns. Strategy and Visitor are often unnecessary when a programming language has functions as first class citizens.

I found that it often helps to remember that the GoF book was written about object-oriented patterns, with examples in C++.
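For example, what Visitor buys - dispatching on the concrete variant - collapses into a function-per-variant `match` helper once functions are values (a minimal Java sketch with hypothetical names):

```java
import java.util.function.Function;

public class VisitorDemo {
    sealed interface Shape permits Circle, Square {}
    record Circle(double r) implements Shape {}
    record Square(double side) implements Shape {}

    // Instead of a Visitor interface with visitCircle/visitSquare methods,
    // take one function per variant.
    static <R> R match(Shape s, Function<Circle, R> onCircle, Function<Square, R> onSquare) {
        if (s instanceof Circle c) return onCircle.apply(c);
        return onSquare.apply((Square) s);
    }

    public static void main(String[] args) {
        Shape s = new Square(3.0);
        double area = match(s, c -> Math.PI * c.r() * c.r(), q -> q.side() * q.side());
        System.out.println(area); // 9.0
    }
}
```

The accept/visit double dispatch of classic Visitor exists to simulate exactly this kind of case analysis in languages without first-class functions (or, here, without sealed types and lambdas).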



