> The big idea of message passing is that it's not about message passing. I think what Alan Kay has been trying to say all these years is that we should look at how bacteria communicate, and try to mimic that.

It is such a horrible programming model, though.

Making calls and sometimes they get picked up and sometimes not?!?

If we're talking remote processes, sure, that's understandable, remote resources are not always available.

But in-process? If I call a function on an object, I want the compiler to tell me right now if this code makes sense, and the compiler should stop me right away if that code cannot possibly work once deployed.




I don't think it's about sending a message into the void and "hoping" that a compatible agent receives the message.

Rather, it's about your object not having to know who is handling the message, only what the message is ("what" in both content and type).

It's still up to you as the programmer to make sure that you have implemented the other objects to act on whatever message types your code will pass. Just like it's up to you to make sure the method exists in the target class when the caller tries to invoke it.

Later on, if you decide to swap out your logging infrastructure for a different one, you don't need to update all the places that called the old one. You just tell your new logger to listen for messages directed at Logger.
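
Roughly, in Java terms (MessageBus, LogMessage and the rest are made-up names, just to show the shape of the idea):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    // The message: senders know its content and type, not who handles it.
    class LogMessage {
        final String level;
        final String text;
        LogMessage(String level, String text) { this.level = level; this.text = text; }
    }

    // A minimal bus: receivers register interest, senders just post messages.
    class MessageBus {
        private final List<Consumer<LogMessage>> listeners = new ArrayList<>();

        void listenForLogs(Consumer<LogMessage> listener) { listeners.add(listener); }

        void send(LogMessage msg) {
            for (Consumer<LogMessage> l : listeners) l.accept(msg);
        }
    }

    public class LoggingDemo {
        public static void main(String[] args) {
            MessageBus bus = new MessageBus();
            // Swapping the logging backend means changing only this registration;
            // nothing that calls bus.send(...) has to change.
            bus.listenForLogs(m -> System.out.println(m.level + ": " + m.text));
            bus.send(new LogMessage("INFO", "application started"));
        }
    }

The senders never name the logger; they only know the shape of the message.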


> I don't think it's about sending a message into the void and "hoping" that a compatible agent receives the message.

Except that is the bacterial model. When you want something done, you signal for it by releasing proteins, and you keep releasing them until you're satisfied, even if the work ends up being done 50 times over. That's why biological systems so commonly overreact to stimuli. The biological method is to flood communication until the demanded response is met. It's the equivalent of a user mashing a button until a program responds, like the elevator button problem.

Actually, elevators seem like an extremely good analogy for this kind of asynchronous service system.

I understand we're not trying to perfectly mimic the analogy, but I think it's important to see that nature's model, while robust and asynchronous, carries significant problems due to how it communicates. It's that very robustness that we're trying to mimic, so we should expect to inherit some of the problems that go along with it.


Yeah, the biological model is much more messy. I think we can take some lessons from it while still enforcing a "God Mode" on our local machine, ensuring that a compatible process is always running to receive the message.

Though, as GP stated earlier, this does make some more sense from a distributed networking perspective, where no machine has control over the existence of its peers. In that case, having a setup where the message is sent out on a service bus to be snatched up by a compatible listener is closer to the biological analogy.


Great insight! To add to this: to counteract it, one needs to implement some kind of synchronization device, which brings plenty of other problems of its own. It would be exciting to see whether nature would choose a more synchronized approach if it had the choice.


But is it possible for the compiler to check that some object was actually set to wait to receive a particular message that another object was instructed to send? When coding with method calls, it is possible to accidentally leave a stub instead of method code that is supposed to produce some important side effect, say, but at least you know where to look.


In his book Programming Erlang, Joe Armstrong talks about how Erlang is OOP in the original sense. You spawn a bunch of lightweight processes, and each process is like a living function with its own brain. Pony is a newer language in a similar vein. Each process can only communicate with the outside world through message passing. You can easily define the semantics of sync vs. async, and what to do if something fails. If something does fail, you've decoupled the error handling, and you can respawn processes, even if the failure was caused by a flipped bit of RAM. If you're just doing synchronous message passing, it should be about as reliable as Java/Ruby, and not much slower.
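
Not Erlang, but here's a rough sketch of that shape in Java, using a thread with its own mailbox (the Counter actor is a made-up example; real Erlang processes are far lighter than OS threads):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // A tiny "process": the only way in is its mailbox.
    class Counter implements Runnable {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
        private int count = 0;

        // Asynchronous send: callers never touch the counter's state directly.
        void send(String msg) { mailbox.offer(msg); }

        @Override
        public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();   // block until a message arrives
                    if (msg.equals("increment")) count++;
                    else if (msg.equals("print")) System.out.println("count = " + count);
                    else if (msg.equals("stop")) return;
                    // anything else is simply ignored
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public class CounterDemo {
        public static void main(String[] args) {
            Counter counter = new Counter();
            new Thread(counter).start();   // a supervisor could respawn it after a crash
            counter.send("increment");
            counter.send("increment");
            counter.send("print");
            counter.send("stop");
        }
    }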

There are still some downsides: you can now easily get race conditions through human error, and you're dealing with distributed-systems problems. You have to worry about defining your communication interfaces and keeping them updated and resilient to change. Message passing has a lot of trade-offs.


> Making calls and sometimes they get picked up and sometimes not?!?

I like that paranoia is the default. Once you get used to it, easy stuff remains easy and hard stuff becomes less scary.

> But in-process? If I call a function on an object, I want the compiler to tell me right now if this code makes sense, and the compiler should stop me right away if that code cannot possibly work once deployed.

You are going off on a tangent; what you are referring to is a matter of typing and compiler help, and not related to the OO paradigm as conceived by Alan Kay. (If there is a relation, I would be happy to learn it.) Put more concretely: I don't see why a Java compiler would give more warnings than a typed Smalltalk compiler.


>It is such a horrible programming model, though.

It's an honest programming model that can deal with real life. A lot of modern systems have converged on Kay's OOP model, except the implementations are implicit, created by continuously tripping over problems that were already solved in the 80s.

JavaScript is Smalltalk minus elegance, minimalism and orthogonality.

Web pages with JavaScript are self-interpreting data in the exact same sense Kay's objects are.

Microservices are crude, bloated objects that communicate through routed messages.

Containers are a validation of Kay's idea that sharing encapsulated objects is easier than sharing data.

Most people commenting here about Smalltalk either never watched any of Kay's talks, or simply are too dumb to understand what he is talking about.

>But in-process?

Most of the real programming problems these days are not in-process.


in-process vs. remote is as relative as "right now". It depends on the scale you're looking at/waiting for.

The interesting thing with the bacteria/biology metaphor is that the time scale variation is huge, from micro to macro.

Your compiler is life itself and the validation is survival, with evolution.

Sure, it may not be enough to be practical for computer-based stuff, but for a resilient/scalable system, that's a very interesting and enlightening angle to look at.


> Making calls and sometimes they get picked up and sometimes not?!?

It's the basis of mainstream OOP as well. When you make a method call, you only know by loose protocol what effect it has, which objects it has an effect on or even whether it has an effect at all, as opposed to manipulating the data structure directly. It is the recipient of the message that decides what effect it has. This doesn't preclude having some mechanism to tell whether the message was accepted or not.

IMO the significant difference between something like Smalltalk-style message passing and Java-style method calling is that an unknown message in Smalltalk is a run-time "error" (the receiver gets sent a #doesNotUnderstand: message carrying the original message), while in Java it can be checked statically, because the object definition specifies which messages it handles.

A simple one-way message passing OO architecture in C, using a global set of messages:

    #define SEND(_objp, _msgp) (((struct Object *)(_objp))->dispatch((struct Object*)(_objp), (struct Message*)(_msgp)))

    enum MsgStatus list_dispatch(struct Object *self, struct Message *message)
    {
        switch (message->method) {
        case MSG_INSERT:
            insert_into_list((struct List *)self,
                             ((struct InsertMessage *)message)->index,
                             ((struct InsertMessage *)message)->value);
            return STATUS_OK;
        case MSG_DELETE:
            // ...
            return STATUS_OK;
        default:
            // Maybe the parent implements the message
            return SEND(((struct List *)self)->parent, message);

            // ... or we could instead decide that this isn't a valid message
            // for this object:
            // return STATUS_NOTUNDERSTAND;
        }
    }

    // ...

    // Construct and send an INSERT message to someList
    createInsertMessage(&insertMessage, 0, "hello");
    status = SEND(someList, &insertMessage);
Now, every object is encoded by a struct starting with an Object struct, which contains a generic dispatch function pointer, so that each object can encode its own dispatch logic and handle whatever messages it receives as it sees fit. In this case, the List struct also has a parent field, to which it defers unknown messages. If it did not, it could just return a not-understood status code. It doesn't have to know whether the parent implements the message.

Likewise, every message starts with a Message struct which contains the message type/method name. Depending on the type it can be cast into more specific messages.

An interesting aspect of this architecture is that it doesn't explicitly implement any kind of inheritance logic. You let the objects handle that themselves. A possible benefit of having any object accept any kind of message is that you decide whether an object failing to handle a message is an error or not. Maybe you don't care that the object couldn't handle, e.g., a NOTIFY_CHANGE message.


There's just not much about this that's unique to OOP. It's just defunctionalization, combined with "add another layer of indirection" to implement dynamic dispatch and something like existential types. It can definitely be useful in some cases, but the way OOP proponents frame this does not seem very helpful. Always remember the YAGNI!


It's true that there is not much about it that's unique to OOP, but that can be said of most high level programming concepts. They're just the cobbling-together of lower level abstractions.

I just wanted to demonstrate why message passing via dynamic dispatch isn't so fundamentally different from static dispatch, conceptually, and to what extent it really is "making calls and sometimes they get picked up or not" compared to any other dispatch mechanism.

Kay might disagree with this on the basis that late binding is important and that an object should not need to know what messages it will be passed at run-time; that the message is just that, a message rather than a contract. The only contract that exists is that an object should be able to receive a message, any message. But even with checked method calls like in Java I think the fundamental idea that you shouldn't have to consider what effect a method has on an object remains. What a language like Java adds is contracts like abstract classes or interfaces, essentially a way to tell the type checker what messages may be passed to an object.
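
For instance, in Java the contract might look like this (a made-up Logger interface, purely to illustrate):

    // The interface tells the type checker which "messages" a Logger accepts.
    interface Logger {
        void log(String message);
    }

    class ConsoleLogger implements Logger {
        public void log(String message) { System.out.println(message); }
    }

    class Service {
        void run(Logger logger) {
            logger.log("running");   // fine: log(String) is part of the contract
            // logger.flush();       // would not compile: flush() isn't declared on Logger
        }
    }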

I agree that this approach can be useful in some cases, and I also think that you most likely not only Ain't Gonna Need It, but that it will cause serious headaches and existential dread when applied to the wrong problem. So will Java. IMO OOP is suffering a bit of a backlash not because it's particularly bad, but because, like a Swiss army knife, it looks deceptively applicable to a wider range of problems than it really is.


"It's just defunctionalization, combined with "add another layer of indirection" to implement dynamic dispatch and something like existential types."

But isn't this a pretty good working definition of "Object Oriented Programming"?


> It's the basis of mainstream OOP as well.

Not at all!

If I call a.foo(), I know for a fact foo() will be called. That statement is not just going to be ignored and dropped on the floor.

What it will actually do, though, nobody has any idea, whether the call is local or remote.


> If I call a.foo(), I know for a fact foo() will be called.

Likewise, when you send the message foo to a, you know it will be received. In both cases, the only way you'll ever have an idea of its effect, if any, is by knowing the state and implementation of a at the point of sending the message or calling the method.

What is different is how you find out whether there was a method to have any effect at all. In Java I'll know at build time, because of its type system: the code won't compile if the method being called isn't defined. In a dynamic language like Python, calling a method that is not defined results in an exception at run time. In Smalltalk, an object receiving a message it does not handle is itself sent the #doesNotUnderstand: message, with the original message as its argument; the default behavior is to raise an exception. Unless you explicitly override #doesNotUnderstand:, this is an implementation detail and really isn't much different, especially compared to dynamically typed languages with OOP facilities.
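
To make the "when does the check happen" point concrete: even in Java, if you defer the lookup to run time via reflection, you end up in roughly the same place as #doesNotUnderstand:, an error raised only when the receiver turns out not to have a matching method (toy example):

    import java.lang.reflect.Method;

    public class ReflectionDemo {
        public static void main(String[] args) throws Exception {
            Object target = "hello";

            // Resolved at compile time: this would not compile if length() were missing.
            System.out.println(((String) target).length());

            // Resolved at run time: an unknown name only fails when we look it up.
            try {
                Method m = target.getClass().getMethod("lenght");   // deliberate typo
                m.invoke(target);
            } catch (NoSuchMethodException e) {
                System.out.println("does not understand: " + e.getMessage());
            }
        }
    }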

So as I said before, the message passing mechanism doesn't preclude communicating whether it was handled. Messages get ignored if that's what you want. Java IMO really has the same potential for run-time uncertainty thanks to exceptions.



