The idea that making things immutable somehow fixes concurrency issues always made me chuckle.
I remember reading and watching Rich Hickey talking about Clojure's persistent data structures and thinking: Okay, that's great, another thread can't change the data that my thread has because I'll just be using the old copy and they'll have a new, different copy. But now my two threads are working with different versions of reality... that's STILL a logic bug in many cases.
That's not to say it doesn't help at all, but it's EXTREMELY far from "share xor mutate" solving all concurrency issues/complexity. Sometimes data needs to be synchronized between different actors. There's no avoiding that. Sometimes devs don't notice it because they use a SQL database as the centralized synchronizer, but the complexity is still there once you start seeing the effect of your DB's transaction level (e.g., repeatable_read vs read_committed, etc).
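A contrived Go sketch of that "different versions of reality" bug (the `account`/`deposit` names are made up for illustration): nothing is ever mutated in place, each worker only derives a new immutable value from its snapshot, yet a deposit still vanishes.

```go
package main

import "fmt"

// account is an immutable snapshot; "updates" build a new value.
type account struct {
	balance int
}

// deposit returns a fresh snapshot; the input is never mutated.
func deposit(snap account, amount int) account {
	return account{balance: snap.balance + amount}
}

func main() {
	shared := account{balance: 100}

	// Two workers each take a snapshot of the same version of reality.
	snapA := shared
	snapB := shared

	// Each computes a new immutable value from its own snapshot
	// and publishes it. No in-place mutation anywhere, yet...
	shared = deposit(snapA, 10)
	shared = deposit(snapB, 10) // ...worker A's deposit is silently lost.

	fmt.Println(shared.balance) // 110, not the 120 both workers expect
}
```

The interleaving is written out sequentially here to make the lost update deterministic; with real goroutines you'd just see it intermittently, which is worse.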
It's not that shared-xor-mutate magically solves everything, it's that shared-and-mutate magically breaks everything.
Same thing with goto and pointers. Goto kills structured programming and pointers kill memory safety. We're doing fine without either.
Use transactions when you want to synchronise between threads. If your language doesn't have transactions, it probably can't add them, because it already handed out shared mutation, and now it's too late to put the genie back in the bottle.
> This, we realized, is just part and parcel of an optimistic TM system that does in-place writes.
+5 insightful. Programming language design is all about having the right nexus of features. Having all the features or the wrong mix of features is actually an anti-feature.
In our present context, most mainstream languages have already handed out shared mutation. To my eye, this is the main reason so many languages have issues with writing async/parallel/distributed programs. It's also why Rust has an easier time of it: they didn't just hand out shared mutation. And why Erlang has the best time of it: they built the language around no shared mutation.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on YouTube does great video updates and tutorials.
I haven't written any Go in many years (way before generics), but I'm shocked that something so implicit and magical is now valid Go syntax.
I didn't look up this syntax or its rules, so I'm just reading the code totally naively. Am I to understand that the `user` variable in the final return statement is not really being treated as a value, but as a reference? Because the second part of the return (json.NewDecoder(resp.Body).Decode(&user)) sure looks like it's going to change the value of `user`. My brain wants to think it's "too late" to set `user` to anything by then, because the value was already read out (because I'm assuming the tuple is being constructed by evaluating its arguments left-to-right, like I thought Go's spec enforced for function arg evaluation). I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
I'm obviously wrong, of course, but whereas I always found Go code to at least be fairly simple--albeit tedious--to read, I find this to be very unintuitive and fairly "magical" for Go's typical design sensibilities.
No real point, here. Just felt so surprised that I couldn't resist saying so...
> I would think that the returned value would be: `(nil, return-value-of-Decode-call)`.
`user` is typed as a struct, so it's always going to be a struct in the output, it can't be nil (it would have to be `*User`). And Decoder.Decode mutates the parameter in place. Named return values essentially create locals for you. And since the function does not use naked returns, it's essentially saving space (and in some cases adding some documentation, though here that value is nil) for this:
    func fetchUser(id int) (User, error) {
        var user User
        var err error
        resp, err := http.Get(fmt.Sprintf("https://api.example.com/users/%d", id))
        if err != nil {
            return user, err
        }
        defer resp.Body.Close()
        return user, json.NewDecoder(resp.Body).Decode(&user)
    }
Yeah, not really an expert, but my understanding is that naming the return value automatically declares the variable (zero-valued) and places it into the function's scope.
I think that for the user example it works because Decode is writing into that same `user` variable's memory.
I like the idea of having named returns, since it's common to return many items as a tuple in Go functions, and I think it's clearer to have those named than leaving it to the caller, especially if a function returns many values of the same primitive type like ints/floats.
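For instance, a hypothetical sketch where the names in the signature document which of two identically-typed results is which:

```go
package main

import "fmt"

// minMax returns both extremes of a slice. With two bare float64
// results, the named returns say at a glance which one is which.
func minMax(xs []float64) (min, max float64) {
	if len(xs) == 0 {
		return // naked return: zero values for both results
	}
	min, max = xs[0], xs[0]
	for _, x := range xs[1:] {
		if x < min {
			min = x
		}
		if x > max {
			max = x
		}
	}
	return
}

func main() {
	lo, hi := minMax([]float64{3, 1, 4, 1, 5})
	fmt.Println(lo, hi) // 1 5
}
```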
> Are there any good resources on optimizing python performance while keeping idiomatic?
At the risk of sounding snarky and/or unhelpful, in my experience, the answer is that you don't try to optimize Python code beyond fixing your algorithm to have better big-O properties, followed by calling out to external code that isn't written in Python (e.g., NumPy, etc).
But, I'm a hater. I spent several years working with Python and hated almost every minute of it for various reasons. Very few languages repulse me the way Python does: I hate the syntax, the semantics, the difficulty of distribution, and the performance (memory and CPU, and is GIL disabled by default yet?!)...
Ints probably get a big boost in languages where the only built-in for-loop syntax involves incrementing an index variable, like C. And, speaking of C, specifically, even the non-int types are actually ints or isomorphic to ints: enums, bools, char, pointers, arrays (which are just pointers if you squint), etc...
But, otherwise, I'd agree that strings probably win, globally.
Actually, because of provenance, pointers are the only C type which isn't just basically the machine integers again.
A char is just a machine integer with implementation-defined signedness (crazy), bools are just machine integers which aren't supposed to have values other than 0 or 1, and the floating-point types are just integers reinterpreted as binary fractions in a strange way.
Addresses are just machine integers of course, but pointers have provenance which means that it matters why you have the pointer, whereas for the machine integers their value is entirely determined by the bits making them up.
It's been a long time since I've done C/C++, but I'm not sure what you're saying with regard to provenance. I was pretty sure that you were able to cast an arbitrary integer value into a pointer, and it really didn't have to "come from" anywhere. So, all I'm saying is that, under-the-hood, a C pointer really is just an integer. Saying that a pointer means something beyond the bits that make up the value is no more relevant than saying a bool means something other than its integer value, which is also true.
That's defect report #260 against the C language. One option for WG14 was to say "Oops, yeah, that should work in our language" and then modern C compilers have to be modified and many C programs are significantly slower. This gets the world you (and you're far from alone among C programmers) thought you lived in, though probably your C programs are slower than you expected now 'cos your "pointers" are now just address integers and you've never thought about the full consequences.
But they didn't, instead they wrote "They may also treat pointers based on different origins as distinct even though they are bitwise identical" because by then that is in fact how C compilers work. That's them saying pointers have provenance, though they do not describe (and neither does their ISO document) how that works.
There is currently a TR (I think, maybe wrong letters) which explains PNVI-ae-udi, Provenance Not Via Integers, Addresses Exposed, User Disambiguates which is the current preferred model for how this could possibly work. Compilers don't implement that properly either, but they could in principle so that's why it is seen as a reasonable goal for the C language. That TR is not part of the ISO standard but in principle one day it could be. Until then, provenance is just a vague shrug in C. What you said is wrong, and er... yeah that is awkward for everybody.
Rust does specify how this works. But the bad news (I guess?) for you is that it too says that provenance is a thing, so you cannot just go around claiming the address you dredged up from who knows where is a valid pointer, it ain't. Or rather, in Rust you can write out explicitly that you do want to do this, but the specification is clear that you get a pointer but not necessarily a valid pointer even if you expected otherwise.
It is kind of fun to me that most C programmers believe that ISO C exists/is implemented anywhere, when in reality we have a bunch of compilers that claim to compile ISO C but just have a lot of random, basically unfixable behavior.
I do believe the C standards committee got it completely backwards with regards to undefined behaviour optimisations. By default the language should act in a way that a human can reason about, in particular it should not be more complicated than assembly. Then, they can add some mechanism for decorating select hot program blocks as amenable to certain optimisations[1]. In the majority of the program the optimisation of not writing a single machine word to memory before calling memcmp is not measurable. The saddest part is that other languages like Rust and Zig have picked all this up like cargo cult language design. Writing code is already complicated enough without having to watch out for pitfalls added so the compiler can achieve one nanosecond faster time on SPECint.
[1] As an aside, the last time I tried to talk to a committee representative about undefined behaviour optimisation pitfalls, I was told that the standard does not prescribe optimisations. Which was quite puzzling, because it obviously prescribes compiler behaviour with the express goal of allowing certain optimisations. If I took that statement at face value, it would follow that undefined behaviour is not there for optimisation's sake, but rather as a fun feature to make programming more interesting...
A pointer derived from one allocation and a pointer derived from another allocation do not need to compare equal, even if the addresses do. And you will likely invoke UB. This is because C tries to be portable and also support segmented storage.
A pointer derived from only an integer was supposed to alias any allocation, but good luck discussing this with your optimizing C compiler.
There are multiple angles. As the stewards of GTK, they should, IMO, try to keep it flexible and customizable to whatever extent is manageable and reasonable. This post is about Mutter, which is a window manager, which should have very little to do with the app "ecosystem". They can, and should, do whatever the hell they want with Mutter, GNOME Shell, Nautilus/Files, etc.
Even in the link you posted, they're talking about GNOME, not GTK.
> things like primarily designing the interface for a touch screen, despite PC touch screens not really taking off.
That was actually an absolute godsend using the Pinephone, and IMO laid the groundwork for the Librem 5 (and modern Linux-on-Mobile interfaces) to take root. I do not believe PostmarketOS would be doing as well as it is if they didn't have apps that play nicely with touch.
You don't use it, and you don't appreciate it, and that's fine. I'd say it most definitely has a place though, without even touching on the chicken-and-egg bit about touchscreen/mobile Linux not taking off vs GNOME pushing for touchscreen/adaptability before it goes mainstream.
I really don't understand why we need to absolutely ruin desktop UIs in order to have mobile interfaces. For web UIs it may be argued as a necessary evil as designing multiple front-ends is expensive and reactive UIs can theoretically be made to exist and shown in small demos to be decent, but when designing desktop applications?
Having a framework that can be adaptable, like GTK, allows for padded, but IMO reasonably-sized touch targets. Designing an adaptive desktop app means the effort is only spent once, but can kickstart the virtuous cycle of "Mobile Linux is less trash than it used to be" -> More users are willing to use Mobile Linux -> More effort is spent making it less trash.
Though if you insist on click-targets that are exclusively for the mouse, I've found most KDE apps less mobile-optimized. The elderly and mobile users can appreciate larger touch-targets, and you can avoid GTK, which seems like a perfect compromise
Point taken on GTK, and I can't really disagree since I haven't even poked at writing a GTK GUI in many years.
But, you still couldn't resist complaining about the UI implementations, which sounds more like complaints about GNOME apps and GNOME Shell. Who cares if you think that GNOME Shell looks like it accommodates touch screens? Firefox, for example, uses GTK and doesn't seem to look like a touch screen UI to me as I'm typing into this text box.
The problem isn't that they accommodate touch screens, but that they do so at the expense of keyboard and mouse users, and then they push these changes to GTK in a way where keyboard-and-mouse interfaces become clunkier and GTK-developed UIs become very hard to integrate with other desktop environments.
Firefox has definitely been affected by this. The hamburger button is a touch paradigm which makes no sense on a large desktop screen with a mouse and keyboard-control scheme. It only serves to add more clicks to every interaction. Likewise the reduction of the scrollbar to a scroll indicator.
I was sad when Gnome 2 became Gnome 3 because I really liked Gnome 2 and Gnome 3 was broken. Then I moved on, but where ever I went insanity from the Gnome project kept leaking and making UIs worse.
I read them, but I'm not sure what point you're trying to make, or why it's directed at my comment. I mean that genuinely.
This Xfce dev says that GTK4 is less capable than GTK3, and they feel that GTK5 will continue in that direction. They also acknowledge certain things in the first comment:
> [0] Full disclosure: I'm an Xfce developer, and have been disappointed with the direction GTK has been taking for some time. I don't begrudge them their prerogative to do what they need/want to achieve their own goals with the toolkit they've built and maintain. But it really is making life more difficult for me.
>
> [1] Part of the argument is that Wayland doesn't natively support things like cross-process embedding, so a cross-platform toolkit shouldn't have these types of widgets (the classic problem of only being able to support the lowest common denominator). But a) you can absolutely build something like that for Wayland (something I've been working on, though it requires tens of thousands of lines of code to do), and b) with other changes, it's incredibly difficult and possibly impossible to even implement the XEMBED protocol on GTK4, for people who do only care about X11.
If the GNOME guys took out stuff from GTK4 or 5 for bad reasons, then I don't like that, either. Which is basically exactly what I said. However, it sounds like some of these changes would be hard to do and maintain well, such as cross-process embedding. Perhaps the GNOME devs made a decision to focus their surely limited resources toward things they think will be long-lasting. And, perhaps, by their estimation, trying to support Wayland and X11 by adding (and maintaining) tens of thousands of lines of code would be a big burden--especially if they believe that X11 is not going to be super-relevant in the near future. I don't agree with that estimation, and I assume that it'll be a very long time before X11 isn't necessary anymore, but so be it.
All that said, it still has nothing to do with Mutter, which is why I replied to the comment that I did. Because GTK, and Mutter, and GNOME Shell, and GNOME apps, and non-GNOME GTK apps, are all different things, and this post was about Mutter.
You raised GTK stewardship. I replied about GTK stewardship. Why raise GTK stewardship and complain replies are not about Mutter exclusively? Why raise GTK stewardship and dismiss it saying so be it?
The 2 paragraphs you quoted did not represent the 10 you did not quote.
An Xfce developer saying they can't recommend GTK for new projects outside the GNOME umbrella had information your comment did not. It was not basically exactly what you said.
Unfortunately, the context starts getting lost as we get deeper into discussion threads like this, but originally, I brought up GTK stewardship because I felt that the top few comments in this thread started to conflate the various projects developed by the GNOME organization. The original HN post was about Mutter, and the first few comments in this reply chain were about software being customizable, etc. Those could've been about whether it's okay or not for Mutter to lose flexibility. But, the one I replied to started complaining about software "imposing limitations on the rest of the ecosystem".
That's when and why I decided to point out that there are different kinds of software projects, and they have different goals and priorities. It's like the old "library vs. application" code: libraries are generic and reusable, and should be written as such, whereas applications are specific and focused.
I brought up GTK simply as an example of a "library project", for which critique of its reusability is warranted, as a counter-example to Mutter, which is an application. Complaining about Mutter's effect on "the ecosystem" is silly. It wouldn't make any less sense to complain about XTerm's effect on the ecosystem by it not supporting Wayland. Anybody in their right mind would just say "So, use one of the other 10,000 terminal emulators in Wayland instead of XTerm"--and rightly so. But, because Mutter is a GNOME project, and GTK is also a GNOME project, I think that people lose focus on what they're talking about.
I did engage with you about GTK because it's interesting, but my point in bringing up GTK was specifically to say "Yeah, those complaints might make sense if we were talking about GTK, but since we're talking about Mutter, they do not." to the comment I replied to.
Your perspective is more clear now. But I disagree.
The article was not about Mutter exclusively. It explained that Mutter dropping X11 makes GNOME strictly focused on Wayland-based environments. And GNOME Shell would be tied to Mutter even if the article didn't mention it.
Mutter is a library. GNOME is not the only desktop environment which uses it. I don't know if the Pantheon developers wanted to drop X11. But Mutter dropping X11 imposes this limitation on Pantheon.
Your claim the Transmission discussion was not about GTK was incorrect by the way. The GNOME developer said they hadn't decided if GTK would deprecate GtkStatusIcon. The Transmission developer requested GTK make an abstraction. The last comments were a GNOME developer recommending Transmission change architecture so it could support GNOME and in his words whatever odd desktop people want to use after GTK deprecated GtkStatusIcon.
Libraries and applications are not separate inherently. Apple's application scripting architecture was a great strength when it was more supported for example.
Probably yes. And, good. It's free software. I still use GNOME Shell, and the minute they make a change that I don't want to deal with, I'll change to something else. Easy as that.
Yes. The first time I heard/read someone describe this idea of managing parallel agents, my very first thought was that this is only even a thing because the LLM coding tools are still slow enough that you can't really achieve a good flow state with the current state of the art. On the flip side of that, this kind of workflow is only sustainable if the agents stay somewhat slow. Otherwise, if the agents are blocking on your attention, it seems like it would feel very hectic and I could see myself getting burned out pretty quickly from having to spend my whole work time doing a round-robin on iterating each agent forward.
I say that having not tried this work flow at all, so what do I know? I mostly only use Claude Code to bounce questions off of and ask it to do reviews of my work, because I still haven't had that much luck getting it to actually write code that is complete and how I like.