Swift Distributed Actors (swift.org)
168 points by samcat116 on Oct 29, 2021 | 128 comments



This seems like a super interesting feature to have as part of a language. I hope Swift can eventually become more of a general purpose language, as it's actually pretty nice and they're introducing features like this that clearly target server-side programming. However, it seems that right now it's tied too much to just iOS development and Xcode, which doesn't make for a great experience using it for other things.


> I hope Swift can eventually become more of a general purpose language, as it's actually pretty nice and they're introducing features like this that clearly target server-side programming.

I hope for this too. I did a personal project in Swift on Linux and found that I loved the language. However, I was greatly frustrated by the lack of libraries. For Swift to reach the mainstream, IMHO, the Swift Foundation needs to move much more of the ecosystem to platforms other than Apple's. I would vote for that even ahead of adding new features to the language.


> IMHO, the Swift Foundation needs to move much more of the ecosystem to platforms other than Apple's.

I suspect Apple is trying to do this for straightforward reasons like wanting to remain relevant and fostering community growth so as to retain credibility… while also subtly trying not to encourage the use of Swift for any GUI framework/platform other than Cocoa + macOS. Obviously I don’t have any evidence to support my fantasy speculation (well, X11 for macOS is dead) - but consider how Apple is so very protective of their UX moat (and PWAs be damned). It’s not in Apple’s interests to facilitate and support any kind of true cross-platform UI framework: remember what Java AWT/Swing apps were like on OS X? The majority of dispassionate (i.e. not Panic Inc) devs are never going to optimise a cross-platform UI for macOS, because the whole point of using an xplat UI is so you don’t need to spend time on platform-specific code. Those two reasons combine to form a vicious-cycle negative feedback loop that devalues their platform.

But with the web finally becoming the true, real cross-platform target that Microsoft was then-irrationally terrified of 25 years ago, I don’t know what Apple’s strategy is to remain the provider of the best UX when everyone is going to be using Chromium in some form. And if Apple keeps pushing its own native-app devs to its hostile App Store while everyone else moves to the web, then the value-prop of an Apple computer looks bad compared to a Chromebook, which is essentially just-as-locked-down and just-as-capable, just without iOS integration and a modest selection of (albeit) high-quality boutique Mac App Store apps.

————

On a related note: Apple does not have long-term support commitments as a matter of policy - which means enterprise/slow-moving/blue-chip shops won’t be interested in using Swift at all (compared to ISO-backed C++, Oracle’s Java, or MS’ .NET). Apple is going to have to maintain Swift’s support for Win32 and *nix in the long run if it wants to build credibility and win over large programming projects - but that’s just so against their corporate culture. It reminds me of their middling support for Safari for Windows. I know Apple gets Win32 and *nix support for Swift “for free” by piggybacking off Clang+LLVM, which makes Swift more like Java (“xplat for free”) - but that’s just the bare minimum they need to do: without Apple actively investing significantly in the developer experience for non-Apple platforms, it just isn’t going to take off. And they can’t do that, for fear that a critical mass of hackers will port Cocoa to Win32.

Swift is basically analogous to Google’s Dart+Flutter, but it wants to be like Go.


My understanding (and I invite correction if I'm mistaken) is that the Swift Foundation was created to address the principal concerns you've articulated. But ports have been slow: Linux works but with the previously mentioned lack of libraries, and there is a threadbare port to Windows which barely runs.

The app I was working on was a CLI tool that generated a log; and even for that non-UI use case, basic libraries were effectively missing.

It's a shame because it's truly an elegant language.


> is that the Swift Foundation was created to address the principal concerns you've articulated

Swift does not have an incorporated "Swift Foundation"; instead, Swift.org's about-us page is surprisingly frank about Apple's control over the language and ecosystem ( https://swift.org/community/ ):

> Apple Inc. is the project lead and serves as the arbiter for the project. The project lead makes senior appointments to leadership roles, with those leaders coming from the worldwide Swift community of contributors

And the site's footnote reminds us that Swift.org is not separate from Apple, but again, owned by Apple Inc.

--------------

On an unrelated note, the .NET Foundation had an upset a few weeks ago: https://www.theregister.com/2021/10/11/dotnet_foundation_com...


It's made quite clear every summer that Apple has a stranglehold on the language steering committee, as when a new language feature that just so happens to enable a closed-source Apple framework drops out of the blue and then goes through a retroactive pseudo-open review.

Frankly, this distributed actors thing is more of the same -- it's something that someone at Apple thinks is important, so, despite it being just a library, it gets active support from the compiler and OS teams. There's absolutely no way that an outsider could have gotten anywhere proposing anything like this.

I like Apple's platforms and enjoy working with the language as well, but the openness is rather a charade.


Distributed actors don’t actually have compiler support. They use a standalone code generator, much like gRPC. The plan is to call the generator automatically during the build process using upcoming SPM plugin support.

EDIT: It looks like there is a plan to remove the SPM plugin and make it a first class feature. This work just started and is going through a normal evolution process.


> Distributed actors don’t actually have compiler support.

I said from the compiler and OS teams -- there are at least two compiler engineers listed on the proposal.

But this is insupportable nonsense anyway: `distributed` is a brand-new keyword, there are new rules about how inits work, magic property behavior... I haven't followed the proposal carefully, but there are also new toolchains being generated to enable the feature.
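
For reference, here's roughly the surface area in question - a minimal sketch, assuming the `LocalTestingDistributedActorSystem` that recent toolchains ship in the Distributed module (the transport/actor-system names were still shifting during review, so treat this as illustrative):

    import Distributed

    // `distributed` is new language surface, not just a library type.
    distributed actor Greeter {
        typealias ActorSystem = LocalTestingDistributedActorSystem

        // Only `distributed` funcs are callable across the network, and the
        // compiler requires their parameters and results to be serializable.
        distributed func greet(name: String) -> String {
            "Hello, \(name)!"
        }
    }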


> But with the web finally becoming the true, real cross-platform target that Microsoft was then-irrationally terrified of 25 years ago, I don’t know what Apple’s strategy is to remain the provider of the best UX when everyone is going to be using Chromium in some form.

This certainly seems to be the case for desktop apps but not so much for mobile. Personally I'm rooting for the mobile web but the momentum seems to be on Apple's side right now.


> X11 for macOS is dead

I don’t think this is nefarious. X11 is a klunky GUI, at best, compared with pretty much everything else out there. It really can’t hold a candle to more developed GUI systems.

I was never interested in Swift as a server language, but that’s mainly because of the “chicken-and-egg” thing, as it was not supported by server infrastructure. I love the language, itself.


> I don’t think this is nefarious. X11 is a klunky GUI, at best,

That's the problem: I see you're being dismissive of X due to its ergonomics and aesthetics, despite the fact that X just works and is still a vital component in computing - the same reason Windows retains both cmd.exe and PowerShell. I feel that your kind of reasoning is why we think Apple is so capricious with their consumer products (removing iPhone headphone ports, deliberately making HomePod incompatible with Android and non-Apple Bluetooth devices, the trash-can Mac Pro, etc.): they're foisting their own kind of paternalistic computing vision on their users.

People that need reliable, dependable computing experiences need to be able to use all the tools out there - old-but-good, tried-and-tested, no matter how "clunky" - and this doesn't neatly fit into the Apple Way(TM). While I appreciate and respect Apple immensely for significantly and successfully driving the standards for user experience and overall product industrial design, it isn't perfect... if only they'd tone down their institutional corporate arrogance.

> compared with pretty much everything else out there. It really can’t hold a candle to more developed GUI systems.

X is a windowing system; it isn't a complete GUI system. Consider that what Apple and Microsoft have both done quite successfully, since the very beginning of macOS and MS Windows, is completely hide the fact that the widget/control libraries, the window manager, the windowing system, and the compositor are all completely separate and independent components that actually have no dependencies on each other at all. Having that visual and design consistency between those functionally separate components is what made Windows 95 and MacOS 8 (and especially OS X) so usable and approachable to _normal people_.

Now, as X and Wayland are just one component in an entire GUI system, there really isn't a strong argument for removing vital components from their own systems just because they don't like the way they look - as though Apple is trying to avoid looking like an HP-UX CDE desktop from the mid-1990s.

I'm rambling, it's 3am now argh.


Is it Apple’s responsibility to maintain X on the Mac, though?

If they want to implement something that may interfere with X (usually they do this kind of thing because security), should they cripple the experience for half a billion users, to appease a small minority of techies?

I’ve been writing software for Apple devices since 1986, which means that I’ve had lots of rugs pulled out from under me, and have been incandescent with rage at Apple, many times.

That said, I’ve never regretted choosing it as my preferred platform, despite thirty years of insults and put-downs from other tech folks.


> Is it Apple’s responsibility to maintain X on the Mac, though?

Yes and no.

So POSIX and the Unix Specification(TM) are standards thanks to the US federal government putting its foot down and mandating that major contractors (like IBM, Sun, HP, etc.) stop being incompatible with each other - and in practice this also means having to support X.

When OS X launched, Apple loved to tell everyone that it was a true Unix(TM) system (after all, they did spend the money to get it certified as a real Unix system), and this meant that OS X-based systems were suddenly candidates for major US and international scientific and research contracts and gigs - which dovetailed nicely with the hype around their boasts of the PowerMac G5's literal supercomputer performance (and indeed: literal supercomputers comprised of racks of G5 machines in a bunch of US Navy labs).

Now, in the 2020s, it's clear that since the iPhone came to dominate their revenue, Apple is no longer really interested in being part of government supercomputing and scientific-computing contracts - the positive PR and technical credibility they gain from all of those government projects is insignificant compared to the cash revenue of simply running another iPhone ad campaign. The 2013 Mac Pro was solid proof that Apple does not value the high-end desktop and workstation computing markets anymore. Apple seems content to even let Windows take away their previously impenetrable hold on the desktop video-editing market: whoever was using FCP on a Mac Pro in 2006 is almost certainly using Adobe Premiere on Windows today.

...having said all of that, I was surprised to see that macOS 12.0 Monterey is still an official Unix(TM) certified product: https://www.opengroup.org/openbrand/register/brand3673.htm - despite Apple's deprecation (and obsolescence?) of their support for X.

-------------

So my point, if anything, is that if Apple wants to maintain its credibility as a high-end computing systems provider, it needs to revert back to doing far more than paying lip-service to being a Unix vendor - and that includes bringing native support for X (and some form of compatibility with the rest of the Unix/Linux GUI ecosystem) back into macOS - but if Apple continues to remain uninterested in selling themselves as a professional workstation vendor (which I feel is implicit in how they've been locking-down macOS and their Apple Silicon-based machines) then you're right, they don't have any responsibility to maintain compatibility with X.

As it is, I don't know what Apple wants - or even what Apple would do in the event they lose their cash-cow and grip on the world when the iPhone is somehow obsoleted by a competitor. Of course if that happens then they'll quickly reverse-course and try to make-up for all of their substandard devX moments.

-------------

> That said, I’ve never regretted choosing it as my preferred platform, despite thirty years of insults and put-downs from other tech folks.

Pardon my asking, but I'm especially curious - what kind of software do you write? Unless you work for a major creative-tools company like Adobe, a boutique Mac shop like Panic, or as an indie Mac App Store dev, I have trouble imagining what market there is for other types of software on the Mac. Whatever native macOS/Cocoa line-of-business software there was over the past couple of decades will certainly have transitioned over to being a SaaS web-app by now.


> So POSIX and the Unix Specification(TM) are standards

No one fully supports POSIX.

Setting aside server systems like AIX or HP-UX, Apple is literally the only company that has a POSIX-certified consumer OS (OS X since 10.5 Leopard [1]).

> So my point, if anything, is that if Apple wants to maintain its credibility as a high-end computing systems provider, it needs to revert back to doing far more than paying lip-service to being a Unix vendor - and that includes bringing native support for X

What's a "high-end computing systems provider"? Apple is doing really fine as is. Why should they do "far more", and why does this "far more" specifically include support for X? They supported X for years (OS X 10.2 to 10.7) and then decided it wasn't worth it (and it isn't).

[1] https://www.opengroup.org/openbrand/register/index2.html


Oh, I won’t get into it. That’s a rabbithole that is best left to pocketwatch-bearing wildlife.

Suffice it to say that a few folks (and I don’t claim to have a huge fanbase) seem to like the stuff I write.

As far as SaaS goes… Well, let’s just say that this is a familiar tune. That ol’ “thin client” zombie just keeps lurching out of the grave…


> remember what Java AWT/Swing apps were like on OS X?

Imagine if Apple/Microsoft would invest some dev hours in Java/Qt so it kept up with the updates they put out; we could get decent cross-platform apps. But Apple/Microsoft don't want cross-platform apps/games.


I never liked X11, its API is much worse than any complaint one can imagine over Win32.

It is no surprise that the only UNIX clones that had a good desktop user experience always did their own thing instead of pursuing the X11/Motif path.


The architects of IRIX would like a word with you.


The ex-architects of a dead UNIX, most likely busy with their macOS or Windows laptops in 2021.

Really curious what success story they are about to tell me.


I meant that at-the-time (in the 1990s), IRIX had a cohesive desktop experience. While it wasn't as pretty as Windows 95, it was certainly usable and not unpleasant to look at.


4Dwm was just another Motif derived desktop with all the technical downsides I was referring to.


I hope not, at least until they solve it being Apple-only. Of course, there's some support for other platforms but it's pretty bad. At this point it's better to invest time and money to the Rust ecosystem.


I think the language and runtime are evolving too fast for good support of other platforms, outside of NIO-based apps running in a Docker container. This should change once Swift has all the features originally planned for it. They are getting close, but there are several areas that need work. The biggest is an ownership system. But there are also variadic generics, extensions to the pattern-matching system, C++ bridging, completing some missing concurrency features, SPM features, and lots of optimizations. Cross-platform support is mostly for early community building right now, much of it funded by Apple directly. Maybe around Swift 7 or 8 it will be better suited for other platforms.


> Konrad Malawski is a member of a team developing foundational server-side Swift libraries at Apple, with focus on distributed systems and concurrency


If I am not wrong he also worked on Akka, the distributed actors library for Scala.


No, you are not wrong: https://akka.io/team/ (check the Honorary Members section)


Yep, absolutely! He did fantastic work for Akka. Didn't realize he had left the Scala world, but good for him for taking the vision to other ecosystems.


Right, you will never get your hands on a newer Swift compiler without also installing a whole new macOS with it, which is unacceptable for me. It's a shame, because Swift isn't a bad language.



My experience is the opposite: using Swift for Apple-ecosystem development basically removed all of the benefits that Swift the language offered. It’s clearly designed as a general-purpose language, and its adoption is mostly curtailed by this reputation.


I’ve been using Swift to develop Apple software since 2014. I’ve released quite a few apps, written with it. Before that, I used ObjC. I’m extremely happy with Swift.

As far as adoption goes, many large codebases are likely still ObjC; simply because of hysteresis. Major architectural shifts take a lot of time, money and risk. That’s one reason I don’t feel that much urgency to jump on the SwiftUI/Async stuff (but I like what I see). I’m interested in shipping, and shipping is almost always a couple (or more) steps back from “the bleeding edge.”

In my experience, there’s not much of a downside to waiting for tech to mature and stabilize, other than not having jargon on my CV.

Swift was unusual for me. I started learning it, and releasing apps in it, right away. It was a gamble, and it paid off. There’s lots of folks that are probably just getting started with Swift, now.


In my last job until I retired this year, I spent 5.5 years writing only Swift at a large (not FAANG) company. It took the entire mobile team about that time to retire most of the Obj-C codebase(s). Swift is by far my favorite language despite having spent decades with C/C++/Java and Obj-C.

I am still writing Swift despite being retired but as part of my art generation work now.


> I am still writing Swift despite being retired but as part of my art generation work now.

I've been "retired" for four years, and work more, every day, than I ever did, as a grunt.

I write Swift 7 days a week. I really like the language.


> I am still writing Swift despite being retired but as part of my art generation work now.

Anything publicly available? I’m always looking for more generative art!


twitter @digconart

not selling yet, but there is lots there


To be clear, I quite like Swift and did from day 1. Granted, I’ve only tinkered with it. What I didn’t like was interacting with Cocoa APIs. I found it difficult to use Swift’s protocols and structs in an FP style, because Cocoa isn’t designed that way.

I imagine this has gotten a lot better since, particularly with stuff like Swift UI. That said, I’d personally find Swift more compelling to use as a server language.


Cocoa uses classic MVC, and actually started from NeXTSTEP (last century). UIKit smoothed it out a bit, but it’s still MVC/OO under the hood.

SwiftUI was developed from the ground up to use FP concepts and things like Protocol-Oriented Programming.

I’m looking forward to learning it, but the fact that most of the Apple world runs on tech from a design that was developed in the 1980s should be an object lesson in the structural integrity of “classic” OO and the MVC pattern.

We rubbish the past at our peril.


> We rubbish the past at our peril.

It’s not as if functional programming is newer or untested. The reason most of the Apple world continues to use these paradigms is because the codebases which do continue to exist. Inertia alone would keep ObjC around for another 20 years, and the software industry is almost totally rewrite-averse.

Anyway, the rest of your comment is exactly what I’m talking about: Swift and its value propositions just don’t match up with the things it has had to interop with. I’m hoping Apple actually dedicates resources to SwiftUI documentation and adoption, and that gradual uptake is enough to stop worrying about fitting Swift to highly stateful OOP APIs that weren’t designed for it.


> It’s not as if functional programming is newer or untested.

It's not about maturity. It's about suitability for the purpose.

FP is probably not ever going to be easier to use, but it is safer, and can be used to safely implement algorithms of greater complexity than other paradigms.

It needs to be coupled with a reactive, event-driven system (that has state, but the state can be hidden in end nodes or a model), in order to deliver a great UX; which, at the end of the day, is what this is all about.

You can't have UX without state. It's all about where the state is maintained, and how many copies of it need to be made, as the program executes.

For example, say we have a checkbox that reflects/controls the state of a variable in our calculation. The on/off state is important. If it is on, let's say that we will introduce a compensating coefficient. If it is off, that coefficient is not applied to the function.

The state is not a part of the function, so it doesn't make sense to incorporate it (or even knowledge of the existence of that state) into the function.

Nevertheless, it is a crucial component of the user experience, and needs to be maintained somewhere.

The best bet may be to try to keep the state in the checkbox object itself, and query it at runtime, or use a messaging system to be informed of state changes. I've always found a central model to be annoying. If the UX is at all complex, the model can get crazy and turn into a richly productive bug farm.
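
A minimal sketch of that arrangement (the names and the coefficient are invented for illustration):

    // The calculation stays pure: the checkbox state arrives as a parameter,
    // so the function needs no knowledge of where that state lives.
    func adjusted(_ value: Double, compensated: Bool) -> Double {
        compensated ? value * 1.08 : value  // 1.08: arbitrary coefficient
    }

    // The checkbox object owns its state; callers query it at runtime.
    final class CompensationCheckbox {
        private(set) var isOn = false
        func toggle() { isOn.toggle() }
        func apply(to value: Double) -> Double {
            adjusted(value, compensated: isOn)
        }
    }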

Federated/franchised state can have its own issues, like when there's a lot of interaction/dependency between state nodes. That can get crazy, too, and difficult to debug.

So far, no one has come up with a perfect pattern, but I kind of like the idea of a "composer" pattern that queries federated states and "weaves" them together at runtime, as opposed to a model that sends directives out to a set of nodes.
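
To make the "composer" idea concrete, here's one possible reading of it - everything below is invented for illustration:

    // Each UI node owns ("federates") a small piece of state.
    protocol StateNode {
        var snapshot: [String: Bool] { get }
    }

    struct Checkbox: StateNode {
        var isOn = false
        var snapshot: [String: Bool] { ["compensate": isOn] }
    }

    struct Toggle: StateNode {
        var isEnabled = true
        var snapshot: [String: Bool] { ["enabled": isEnabled] }
    }

    // The composer queries the federated states and weaves them together at
    // runtime, instead of a central model pushing directives out to nodes.
    struct Composer {
        var nodes: [StateNode]
        func weave() -> [String: Bool] {
            nodes.reduce(into: [:]) { woven, node in
                woven.merge(node.snapshot) { _, newer in newer }
            }
        }
    }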

The issue that lots of folks have with OO is polymorphism. That is not really the same thing. It can make discovering an algorithm difficult, as functionality can cascade through a whole bunch of levels, and debugging can be a real pain in the butt. If anyone has had sticky CSS specificity issues, they have an idea of what it can be like to grope around in a polymorphic codebase.

Protocol-oriented programming is not a panacea, either. This is especially true when mixed with polymorphism. I wrote about a strange issue I encountered, here[0].

But polymorphism isn't bad. It's just a tool, and we don't use a nailgun as a hammer. Like so many tools, people have tried to shoehorn it into unsuitable configurations, and refuse to consider other, more suitable tools. I see FP proponents, doing the same thing.

Swift is nice, because it has a rich toolbox, allowing us to select the right tool, for the right job, without having to do complex linking games between libraries of different languages.

[0] https://littlegreenviper.com/miscellany/swiftwater/the-curio...


Woah, very cool - I guess Distributed Objects is getting a facelift and making a comeback. [1]

What's old is new again, as they say.

[1] https://developer.apple.com/library/archive/documentation/Co...


DO was super cool... Made things very easy.

That said, I guess Swift is trying to do a better version of DO.


RMI did it first, actually.

https://en.wikipedia.org/wiki/Distributed_Objects_Everywhere

> By the time DOE, now known as NEO, was released in 1995, Sun had already moved on to Java as their next big thing. Java was now the GUI of choice for client-side applications, and Sun's OpenStep plans were quietly dropped (see Lighthouse Design). NEO was re-positioned as a Java system with the introduction of the "Joe" framework, but it saw little use. Components of NEO and Joe were eventually subsumed into Enterprise JavaBeans.


So Swift is pretty much dead in the backend space and somehow they're pushing actors in the language?

I just don't understand the vision.


Why do you think it is dead? I have always thought of it as incomplete as a language; just give it time. Swift seems to be my cup of tea after years of Java and TypeScript. I would really like to learn about these new features and apply them in a backend scenario. Julia is another language that did not catch on as people expected (especially in the data science community), but I enjoy using it for my personal projects; that does not mean it is dead.


> Julia is another language that did not catch on as people expected

Wait what? Julia has been growing very consistently for years now. I don’t think anyone with a reasonably informed opinion thought Julia was going to take over the whole world in the blink of an eye.

It seems to be going right along the lines that most people I know in the community expected.


How is the Swift experience outside of the Apple ecosystem? And how does it compare to other languages like C# and Java?


Pretty bad. In my last company we tried to get server-side swift integrated into a backend microservice.

While there is big-company momentum behind swift on servers, it’s always a second-class citizen compared to its own native platforms (iOS & macOS).

After IBM stopped backing it, credibility in server-side swift was pretty much killed.

We switched over to Rust fairly early on, and we were glad we did.

I think the realization was that client-side app engineering is just different from distributed back-end systems.

On iOS, we have a very rich set of standard toolkits we can consume. So development is fast and streamlined. Battle-hardened.

But with swift on servers, you’re reinventing the wheel for even the most basic things. So it may very well be a completely different language on the backend at that point.


For web backend, I like Swift better as a language, but the ecosystem and the performance are not really there yet. The compilation times are also a bit annoying.


I was perusing the implementation of String the other day, and it’s intrinsically linked to the ObjC runtime to allow toll-free bridging. The multi-platform story for Swift is non-existent.


This is only true on Apple platforms. String, of course, exists on all platforms. Of all the issues Swift's cross platform story has, this isn't one.


Where is the side to side comparison of features and to-do between this and existing distributed actor implementations, such as erlang + otp or java and whatever thing it uses?

This approach of ignoring the rest of the world and explaining Swift concepts as if they exist in a vacuum is frustrating and counter-productive.

Even if this were a totally new feature, I'd expect links to research papers.

It is one thing to present end users with a 'shiny new iPhone' every year. It is another to treat developers like idiots and have every Swift-related announcement be a sales pitch for a 'great new feature' when in reality they are all features other languages have had for decades (and which are thus mature, while Swift is always half-baked and full of gotchas), and this is yet another re-implementation of what already exists, with undocumented arbitrary differences that people then have to spend time figuring out (aka a free beta test) - when I'm sure the people who implement this stuff (hopefully) know full well what trade-offs they are making.


I skimmed through the blog post, and it seems to me that they are not claiming they invented something new, just that they are adding this to Swift language.

(Distributed) actor model is nothing new, but I definitely wouldn't classify it as something that's common in other languages. To my knowledge Erlang/OTP is the only mainstream language where this is part of the standard library.

There are implementations for other languages, such as Akka for Java/Scala and Orleans for dotnet, but most languages (sadly) don't have a battle-tested implementation of this.


> most languages (sadly) don't have a battle-tested implementation of this.

I’d argue that this is because it’s a failed idea that conflates OO encapsulation, concurrency constraints, identity, serialization, and networking into a behemoth that does none of those things well.


> Orleans for dotnet

Also Akka for .NET and my echo-process library

[1] https://github.com/louthy/echo-process


They are adding features they - to put it kindly - 'took' from other existing, mature implementations, without giving any credit or acknowledgement.

Not only that, they expect people to test their beta-grade software, because they keep releasing it half-baked, as they have ever since the release of Swift itself.

It's embarrassing, and it also angers me, because these companies have walled-gardened their ecosystems with proprietary languages and tools that all do the same thing poorly, differently, and in ways that prevent sane cross-platform development - on purpose - this being yet another Apple-only 'thing'.

Here's Swift's roadmap: copy C# and other languages for the next 5 years, while making sure you can't write cross-platform code, by adding arbitrary garbage like SwiftUI or this 'distributed actor' implementation that will surely differ from, and not be compatible with, existing features of other languages and their tooling.


Gosu's roadmap... copy .NET and Groovy

Kotlin's roadmap... Copy Gosu and Groovy

Java's roadmap... Make Gosu, Groovy and Kotlin redundant by supporting the idioms of Gosu and Kotlin eventually and force those languages to 'nativize' their underlying implementations.

Manifold's roadmap... Ditch Gosu on the JVM and make a Java library; copy all the stuff Java doesn't have a plan for, like .NET's (and by extension Gosu's) class extensions, plus Java 8 to 9 workarounds

We could go on and on. Nothing new under the sun. Great minds think alike, and at the end of the day the value of delivering a product is what matters.


Ah yes, that's exactly what the slave owners said before the civil war - look at the product we are delivering, and everyone else's doing it too!

Their argument was so compelling that we decided to keep having slaves.


That might be the worst analogy I've ever heard. Humans extracting value asymmetrically out of captive humans has nothing to do with syntactical features and/or sugar from other languages that language developers are inspired to implement in their own language because they solve modern workflows. And those developers are well compensated for their contributions, be it by money, notoriety, or self-satisfaction.


This blog post is an announcement for a wide audience. In the proposal[1], which gives detail on motivations and implementation, there is an "Acknowledgements & Prior Art" section that mentions Akka, Orleans, Erlang, and Elixir.

[1]: https://github.com/ktoso/swift-evolution/blob/distributed-re...


Ah yes, they acknowledge it in a sentence down on page 37, in a document only people paid by Apple read top to bottom.

Thank you. That really changes things.


> without giving any credit or acknowledgement

You could be forgiven for not being familiar with the Actor model. But if you’re familiar with it, saying that isn’t recognition is bonkers. Everyone who knows what Actors are knows they’re referencing Erlang/OTP.


The model even predates that. It’s like one of the OG concurrency ideas IIRC.


The Actor Paradigm is much more widespread with offerings from Microsoft, Lightbend, etc.


Question for folks: where do server-side Swift folks hang out?

I've been writing server-side / enterprise code in Swift for about a year now, and the language has really grown on me.

But it's a small community, and IBM's & Google's departures were a big blow to the feeling of inevitable growth to it.

I'd love to find a watering hole to chat with others in the same small niche.


Here's an invite for the main Swift server Slack that's not just Vapor https://join.slack.com/t/swift-server/shared_invite/zt-5jv0m...


I understand that there's a public Slack group for server-side Swift. I do not know the invite URL, but you may be able to find it in a search?


You might find some like-minded people in the Vapor Discord server [1]. They've got channels for all the budding Swift tech.

[1]: https://discord.gg/vapor


Nice to see Swift trying to make a dent in server programming using shiny new features; however, I would have liked something on the Android side instead.

Being able to reuse code between iOS and Android, including the Foundation API, would have been an easy way to grow the user base.

It's going to be hard to put Swift in the hands of Go or Kotlin devs.


There was once a really nice open-source cross-platform framework called Apportable that provided an Obj-C compatibility layer on Android. You could develop your app on iOS and then, at the last moment, build it for Android with some minimal changes. I tried it for one of my projects and it worked just great.

Guess what happened to Apportable? It was quietly acquired by Google and completely shut down within days, with practically no traces left on GitHub or the rest of the Internet.

This is because Google will be hostile to any compatibility layer that places iOS in a privileged position and makes Android a second-class citizen, an afterthought in your development process.

With this story in mind, I don't think Apple would step into that territory.


I don’t see how they would block Apple from providing a good experience without crippling the whole NDK community


NDK is already crippled by design.

https://developer.android.com/ndk/guides

> Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.

> Reuse your own or other developers' C or C++ libraries.

Using this set of public APIs:

https://developer.android.com/ndk/guides/stable_apis

Anything else is going into "works on my device, dunno about yours" territory.


So wasn't this part of the original dream for Smalltalk? Then this would seem to be coming full circle: Smalltalk, Objective-C, Swift, and back. I know some of you are Alan Kay fans, so maybe you can comment on that aspect.


Yes, but only a part. It’s not just about having something akin to actors for concurrency and execution. It’s also about having a dynamic system - one that can update its behavior not just by manual request, but automatically as the system is running.

One way to think about this is propagators. I’m still learning myself, but a compelling example is Lisp. With Lisp you can write macros that essentially allow you to treat your code as a tree and arbitrarily modify that tree (aka arbitrarily write code). You can then compile that code while the system is running and execute it. It’s not just about macro expansion at startup or a single compile-time step at the beginning of execution; the system can be designed with this in mind.

It’s also about introspection, the ability to ask questions about the system at runtime as it evolves.

Sussman and Kay both talk a lot about DNA and biology, and the ability for systems to dynamically expand, change, and repair themselves.

When I think about this kind of stuff nowadays, I picture something like Lisp with an execution environment like the BEAM (so basically LFE) and an introspection system powered by a declarative constraint-solving query language (something like the Datalog-style RDF found in things like Datalevin and its predecessors). I think that lends itself really well to these kinds of systems, including another point that Kay talks about pretty frequently: the ability for two systems (and in our case, two actors in one environment count) to negotiate with each other via some shared fundamental language to understand each other’s purpose. SOP-style approaches seem like a compelling way to do that, but the main problem to me is identifying entities as globally unique as part of that negotiation process.

Also don’t listen to me, I’m a monkey.


Probably can't make much progress without a scientific foundation including the Actors Abstraction.


Absolutely. I was watching Alan Kay's "Inventing the Future" talks recorded a few years ago, and this was a thing he referred to in passing.

In recent months, every time I run across the whole "OO is evil/dead/horrible" meme (esp. common here), it turns out to be someone whose experience of "OO programming" is limited to C++/Java and their relatives. If your only exposure to OO was those (and similar) languages, then, YES, OO is horrible and weak and full of leakiness in the abstractions. It's a bit like saying that coffee is horrible when all you've ever tasted is cheap-and-nasty instant made from acorns and floor sweepings. C++ and Java (I can't comment on C#, never having used it in anger) are the palest, most anaemic shadow of OO programming.

Rant done. This addition to Swift might cause me to look at it more closely. (I don't have any reason/desire to touch the Apple grow-tunnel - as opposed to ecosystems, which are natural and open.) My thoughts have also recently run along the lines of "What if we made every message to an object act as if it were always remote (in another address space)?" In other words, for decades we've been trying to make RPC calls look (to the programmer) as if they were "just like local calls", with a justifiable lack of success, because remote really is more wrinkly than local. So what if, instead of trying to make "remote" look/behave/imitate "local", we went in the other direction and made all calls look like remote calls? It means we have to think about all the failure modes and partial-failure modes of remote calls all of the time. Only, for local address-space calls, they're going to be much more reliable in practice. Is it worth it? Can we bring down the cognitive overhead to an acceptable level?

I'm interested to see whether this Swift effort/initiative approaches this idealised/simplified description.
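
For what it's worth, the call-site shape in the Swift proposal looks like exactly that inversion. A sketch (again assuming the local-testing actor system from recent toolchains):

    import Distributed

    distributed actor Greeter {
        typealias ActorSystem = LocalTestingDistributedActorSystem
        distributed func greet(name: String) -> String { "Hello, \(name)!" }
    }

    // Calls into a distributed actor are implicitly `async throws`, even when
    // the instance happens to live in-process, so every call site is forced
    // to acknowledge the failure modes of "remote".
    func run(_ greeter: Greeter) async {
        do {
            print(try await greeter.greet(name: "Ada"))
        } catch {
            // Transport failures (node unreachable, timeout) surface here
            // too, not only application-level errors.
            print("call failed: \(error)")
        }
    }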


You are describing Erlang, and it cannot get simpler than that.


I fondly remember Joe Armstrong, who was a good friend.

However, not having a mathematical foundation has been a severe limitation for Erlang. The Erlang community is trying to dig its way out with modifications and extensions. They still have a long way to go.

Of course, no Actors framework is perfect. Many important implementations have been done in Erlang. Also, significant Erlang projects are ongoing to good effect.


Not really: Smalltalk's "actors"[0]/objects are organized around data-encapsulation principles, while Erlang's "actors"[0]/processes are organized around failure-domain principles. It's not unreasonable to go in one direction (making your data encapsulation match failure domains), but if you try to organize Erlang's "actors" around data encapsulation you will wind up with very shitty code (hard to read, hard to test, hard to debug, unnecessary performance regressions) - unsurprisingly, because that is not what they were meant to be.

[0] I put actors in quotes because neither is a true actor system. Nor was either designed to be, though if you squint they look similar.


See the following for a rigorous definition of the Actors Abstraction:

https://papers.ssrn.com/abstract=3418003


Am I correct in thinking that this is similar to "Session Types"?


Given the heritage I’d also say it’s good to recognize the Actor model in Erlang/OTP and the overlap that has with Smalltalk and everything that followed.


The Actors Abstraction is foundational for computer science, analogous to the Natural Numbers in classical mathematics.


There were a lot of researchy implementations:

"Writing Concurrent Object-Oriented Programs using Smalltalk-80"

http://www.wolczko.com/mushroom/compj.pdf

"Actra = A Multitasking / Multiprocessing Smalltalk"

https://dl.acm.org/doi/10.1145/67387.67409

"Actra - an industrial strength concurrent object-oriented programming system"

https://dl.acm.org/doi/10.1145/127070.127090


This is awesome, I'm always looking for more competition in the distributed actor model/system space.

So now we'll have Akka, Elixir, Erlang and Swift as options. Pretty exciting!

edit: Rust's Actix too!

edit2: jk Actix is only local.


I’m the pessimistic opposite here; I’m a little shocked to see this largely failed idea be dragged back out of the history books again, a good decade-plus after the last major failed attempt.

We’ve gone through this so many times, and — outside the very specialized Erlang — the result is always the same. It doesn’t work.

OO Actors are a terrible concurrency model, and an even worse networking, protocol definition, and serialization model.


I've used Akka extensively and beg to differ. It's such a clean departure from traditional concurrent programming, and the "let it fail" mentality around it is real.

The distributed features work as intended, to great effect, with only certain edge cases (split brain, for example) that need care - but at that point in scale, you're already leveraging all the best parts.


I'm wearing an Akka shirt right now! Agreed. ;) Not my realm, but a friend who was way into SDN systems told me a bit about how Akka powers a vast amount of that work (OpenDaylight), extremely well; I could use a refresher on that, but it was such a great story to hear! As another example, Microsoft has done fantastically with Orleans [1].

I'd like to propose an actually-dead idea: calling ideas dead. That we fail with tech ideas & abstractions is, 99.9% of the time, not really reflective of the idea itself. There's survivorship bias - well, I think we need a term for the opposite: didn't-work bias. A lot of technologies are dogged by image problems from less-than-successful pasts. Too often we pop-culturally and en masse accept that specific failures imply general unsuitability, and become fixed, hostile, and negative towards ideas. We should label this phenomenon and understand it better. "Didn't-Work Bias" doesn't quite feel like the best fit or reciprocal to Survivorship Bias, but it's a start at capturing these prevalent anti-idea sentiments.

[1] https://dotnet.github.io/orleans/


Erlang nailed the actor model.


Erlang incorporated some of the ideas of the Actor Abstraction but missed a few things. See the article and video here:

https://papers.ssrn.com/abstract=3603021

https://www.youtube.com/watch?v=AJP1VL7shiI


Though it's nice to have so many choices, devs that really want actors will pick a battle-tested platform like Erlang+OTP in 99% of cases, because of the nature of the BEAM VM - which also got a JIT upgrade recently.


Also see offerings from Lightbend and Microsoft.


Swift Actors were created by the Akka creator; he's been at Apple for a while now.


Yep, they (ktoso) are the author of this article. I think they were just a lead dev though; not sure if they actually created Akka.


Is Actix distributed?


Actually no, I misspoke. Looks like they're just local.


I think there are crates that enable it, but via 3p stuff like Redis (though I don't know enough about Rust to be certain).


If making such things a language feature is what you need to do, then fine - but in a lot of languages with good metaprogramming (like JavaScript's proxies), one should be able to build things with similar ergonomics & capabilities without having to expand the language.


Swift Actors are an important contribution to the Actor Paradigm.

There is an upcoming keynote address on the Actor Paradigm at the Linux 30th Anniversary Conference. See:

https://reactivesummit2021.sched.com


This sort of language cruft makes me realize that Lisp had it right all along.


Yet somehow the Lisps seem to be doing all sorts of concurrency rather poorly.


Not necessarily: https://lfe.io/

Lisp Flavoured Erlang! ;-)


    (@clojure would like to have [a few words])


Besides Clojure, Common Lisp has nice libraries addressing this issue - and of course the commercial versions have their own too.


For a niche language, Swift seems kind of bloated. Why do you need a special actor keyword? Isn't it just an object with a global/distributed registry or something?


How does security work with this? It looks like the actors can talk to each other over the network, but I don't see any means of authentication.


Just like in Erlang, you would handle that at the transport level, and it appears Swift supports providing your own transports. They didn't mention (or I didn't see) whether the transport they are shipping has any specific auth wrapped around it though.


I would name the "distributed" actor and func keyword "remote" instead. It's shorter and more understandable.


Even in traditional Actor model systems processes aren’t remote by default. They just allow you to reason about remote concurrency the same way you reason about it generally.


The Actors Abstraction does not require specification of computer boundaries.


Unless I’m missing something this is exactly what I was saying? I’m open to the possibility I’m missing something, but my understanding is that Actor boundaries are messenger boundaries and they can be remote, or local processes, or event loop routines, or plain local functions with no real concurrency. The abstraction/model only enforces that communication between them is consistent no matter the actor’s environment.


There is an existing, influential paper called "A Note on Distributed Computing" that hugely influenced Java (and other) languages and frameworks, and that argued precisely against abstracting away remoteness.

The Swift team seems to be challenging that thesis:

"[U]nlike other concurrency models, the actor model is also tremendously valuable for modeling distributed systems. Thanks to the notion of location transparent distributed actors, we can program distributed systems using the familiar idea of actors and then readily move it to a distributed, e.g., clustered, environment.

With distributed actors, we aim to simplify and push the state of the art of distributed systems programming, the same way we did with concurrent programming with local actors and Swift’s structured concurrency models embedded in the language.

This abstraction does not intend to completely hide away the fact that distributed calls are crossing the network, though. In a way, we are doing the opposite and programming assuming that calls may be remote. This small yet crucial observation allows us to build systems primarily intended for distribution and testable in local test clusters that may even efficiently simulate various error scenarios.

Distributed actors are similar to (local) actors because they encapsulate their state with communication exclusively through asynchronous calls. The distributed aspect adds to that equation some additional isolation, type system, and runtime considerations. However, the surface of the feature feels very similar to local actors."

Amusingly, in a section immediately followed by "Deja Vu All Over Again", Waldo et al. write:

"One conceptual justification for this vision is that whether a call is local or remote has no impact on the correctness of a program. If an object supports a particular interface, and the support of that interface is semantically correct, it makes no difference to the correctness of the program whether the operation is carried out within the same address space, on some other machine, or off-line by some other piece of equipment. Indeed, seeing location as a part of the implementation of an object and therefore as part of the state that an object hides from the outside world appears to be a natural extension of the object-oriented paradigm.

Such a system would enjoy many advantages. It would allow the task of software maintenance to be changed in a fundamental way. The granularity of change, and therefore of upgrade, could be changed from the level of the entire system (the current model) to the level of the individual object. As long as the interfaces between objects remain constant, the implementations of those objects can be altered at will. Remote services can be moved into an address space, and objects that share an address space can be split and moved to different machines, as local requirements and needs dictate. An object can be repaired and the repair installed without worry that the change will impact the other objects that make up the system. Indeed, this model appears to be the best way to get away from the “Big Wad of Software” model that currently is causing so much trouble.

This vision is centered around the following principles that may, at first, appear plausible:

• there is a single natural object-oriented design for a given application, regardless of the context in which that application will be deployed;

• failure and performance issues are tied to the implementation of the components of an application, and consideration of these issues should be left out of an initial design; and

• the interface of an object is independent of the context in which that object is used.

Unfortunately, all of these principles are false. In what follows, we will show why these principles are mistaken, and why it is important to recognize the fundamental differences between distributed computing and local computing."

RMI, some may recall, was heavily influenced by the above critique, which led to the very verbose and ceremonial characteristics of Sun's Java distributed-systems APIs that were later criticized as cumbersome.

[1]: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...

[2]: http://www.cs.yale.edu/homes/aspnes/classes/465/notes.pdf


Java RMI was based on Objective-C distributed objects ideas actually, so it is no wonder that these things kind of go in circles.


In a Citadel, distributed Actors are going to be necessary.

See the following:

    Citadels: faster response time and better information integration than remote datacenters 
https://papers.ssrn.com/abstract=2836282


Probably not a good idea to burden application programmers with having to annotate each message send with whether the receiver is on the current computer or another one in a Citadel.

Since Actors can move between computers in a Citadel, such annotations are infeasible in practice.

Of course, modularity, security, and performance should not be ignored.


This notion of an agent moving between computers has always been problematic for me to understand. If we accept that an agent ("actor") is a combination of code and state, and that code and state are both reducible to data, then an agent moving between computers is no more noteworthy than saying data moved between computers and that the move engendered computation based on and over that data - which is a defined sequence of RPC calls: send code; send data; execute(code, data). What distinguishes an "actor" from passing scripts in RPC calls? I think it is a false conceptual paradigm: "actors" in the general conceptual world carry their own processing ("mind") unit. This attribute is not reproducible in software.

So, if the Actor Model is in fact reducible to a complex RPC framework, the question remains whether we should distinguish between RPC and local PC, without throwing ~mystical bits about actors into the question.

In any event, I was wrong in my OP about Swift's approach: it is in fact the Waldo approach, with distinct treatment of remote ops - except instead of annotations it is language-level support, and instead of instrumenting existing code (a la Java bytecode) it is the compiler that writes the boilerplate for the underlying RPCs.


Yes, this is basically the new non-ObjC Distributed Objects or high level XPC interface.


How does this compare to DCOM? (disclaimer, I've never used it)


Much better.

Consider yourself lucky not having to deal with raw IDL without VS tooling and ATL template metaprogramming.


What's the TLDR on why this is a language feature vs a library?


Anything concurrent and distributed you want supported by your runtime a la Erlang.

The reason is: you want certain guarantees. For example, in Erlang you can set up a monitor on a process, and when that process exits for any reason you get a notification (a {'DOWN', Ref, process, Pid, Reason} message) - even if that reason was a catastrophic failure of some sort.

If this is implemented as a library, then the runtime can kill your code at any time, and you'll never know what happened.
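
Swift's answer is partly compile-time rather than runtime: because `distributed` is language surface, the compiler can reject messages that couldn't survive the wire, which a pure library can't do. A sketch, assuming a Codable serialization requirement as in the stdlib's local-testing actor system:

    import Distributed

    final class Session {}  // deliberately not Codable

    distributed actor Worker {
        typealias ActorSystem = LocalTestingDistributedActorSystem

        // Fine: String satisfies the actor system's serialization
        // requirement (Codable), so it can cross the network.
        distributed func run(job: String) -> String { "done: \(job)" }

        // Rejected at compile time if uncommented, because `Session` does
        // not conform to the serialization requirement:
        // distributed func attach(session: Session) { }
    }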


I don't know if that's true. In the Scala and Java worlds, Akka provides distributed actors as a library. As far as I know, what you describe isn't a limitation. See https://doc.akka.io/docs/akka/2.5.32/general/supervision.htm...


It is a limitation. If the VM on the other machine is killed, or kills the thread, you'll never know that happened, or why. Well, you can probably ping the process and whatnot :)

Compare Erlang's error handling and monitors: https://erlang.org/doc/reference_manual/processes.html#error...


I would assume Erlang is doing some type of heartbeat at the runtime level. You can always do this in your protocol if you absolutely need to. I know that in Akka Cluster, you configure this to set the rules for a node to be considered alive in the cluster, and if it's dead, all the actors hosted on it will be considered dead, too.

In a distributed system, I'm pretty certain you cannot guarantee receipt of an expected message, no matter what you try to do.


Erlang has absolutely no security :-(


So what kind of security does this have? The article doesn't mention the term.


After using a data-structure-first framework like Dask, I can't imagine ever going back to an actor-oriented approach.


You still have value semantics to lean on. They integrate really well with Swift’s actor implementation.


This reminds me a lot of Erlang/Elixir's distributed processes: https://elixir-lang.org/getting-started/mix-otp/distributed-....



