I am in the middle, neither a dogmatic skeptic nor a full-blown prompt engineer, but I lost it when the author compared a junior developer (a human) to a SaaS subscription.
Tells you what you need to know about the AI culture.
I don't, that's what search engines are for. If people know what keywords to look up, and are willing to go the extra mile (browse every page of the Google search results), they will eventually find your blog. If you have done a good job, it may land on the first page of search results.
> Where do you share your content?
Random short off-topic ramblings on X. Discussion oriented stuff on Reddit. Personal long-form opinion/perspectives on personal blog & knowledge base.
> why do you keep writing
For myself, I do not owe anyone anything, I don't plan to "build a brand" (or rather I have failed to do that), writing is a form of expression, that's it.
Whenever my gut says "this thought needs to get out of your head because you have been wasting a lot of time thinking about it", that's usually my cue to draft a post.
The Soul of Erlang got me hooked on Elixir recently; I'm trying to get my hands dirty as well.
Other than distributed/concurrent system use cases, could you share what kinds of products are best built with Elixir/Erlang compared to easier-to-write languages like Go, for example?
Anything that spends most of its time waiting on IO, and doesn't do a great deal of number crunching: i.e. any kind of server talking over a network socket.
Go, like many other languages, lowers the barrier to running concurrent code, but there is much more to concurrent servers than concurrency: fault tolerance, isolation, shared state management, instrumentation, introspection, clustering, process migration. The BEAM and its ecosystem give you all of that out of the box.
Also, the BEAM offers an immutable, functional environment. Data races, the biggest pain and source of heisenbugs in any system with more than one concurrent thread, are impossible. This is huge. You can model your entire system as concurrent processes without ever having to deal with concurrency issues. You only ever have to think in "single-threaded" mode.
It's somewhat painful to watch modern languages and ecosystems reinvent (mostly in an inferior manner) things that Erlang (and Lisp) have had since so long ago.
Any sufficiently complicated concurrent program in another language
contains an ad hoc informally-specified bug-ridden slow implementation
of half of Erlang.
As far as the application code goes, that's true, but most systems have databases, which open you up to all kinds of race conditions. Does Elixir help in that case?
My experience with databases in Erlang was that the pattern where one process receives a high-level message and does the database work was best. If you can split the queue in a sensible way, that gives you concurrency. For example, if all the requests address a single user, you can hash users into as many buckets as you need for concurrency; if the individual requests take a long time, it might make more sense to keep a scoreboard of whether a user has a process working for it, and send further requests there, otherwise to an idle process (or spawn one).
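The bucketing idea above can be sketched concretely. This is my own illustration in Go rather than Erlang (hashing into buckets is language-agnostic), and all names are hypothetical: each worker owns one queue, so requests for the same user run serially while different buckets run concurrently.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// request is a stand-in for "do some database work for this user".
type request struct {
	userID string
	work   func()
}

// bucketFor hashes a user ID into one of n buckets, so all requests
// for the same user land on the same worker and are serialized.
func bucketFor(userID string, n int) int {
	h := fnv.New32a()
	h.Write([]byte(userID))
	return int(h.Sum32()) % n
}

func main() {
	const buckets = 4
	queues := make([]chan request, buckets)
	var wg sync.WaitGroup

	for i := range queues {
		queues[i] = make(chan request)
		wg.Add(1)
		go func(q chan request) { // one worker per bucket
			defer wg.Done()
			for req := range q {
				req.work() // requests within a bucket run one at a time
			}
		}(queues[i])
	}

	for _, user := range []string{"alice", "bob", "alice", "carol"} {
		u := user
		queues[bucketFor(u, buckets)] <- request{userID: u, work: func() {
			fmt.Println("handling request for", u)
		}}
	}

	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```

The scoreboard variant mentioned above would replace the fixed hash with a lookup table of busy/idle workers, but the serialization property is the same.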
You do need to manage this somehow, but the building blocks are there. Of course, the building blocks for transactionless database access are there too.
Modern relational databases are not a concern at all. You get race conditions if the developer has only a passing knowledge of SQL, because otherwise that's a basically solved problem, given transactions, isolation modes, etc. If all else fails, your query throws an error, but it does not corrupt your database.
Much more common instead are race conditions on local storage, external APIs and traditional memory races when using unsophisticated languages.
Races and deadlocks are certainly possible in pure Erlang/Elixir - no databases or global state required. There is no magic bullet.
For example, if you use blocking RPCs between two processes, they can deadlock waiting for each other's responses. Try implementing Dining Philosophers, you will learn a lot.
The answer is not to use blocking RPC, but that will challenge your brain topology. It takes some time to be able to lower yourself into the hot bath of full asynchrony, but when you do, it feels very very good.
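The circular-wait mechanics behind Dining Philosophers are the same whether the "forks" are blocking RPCs or locks. A minimal sketch of the classic lock-ordering escape hatch, in Go rather than Elixir, with names of my own invention: grabbing forks in a single global order makes the circular wait (and hence the deadlock) impossible.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// dine runs one goroutine per philosopher; each needs two forks
// (mutexes). Naively grabbing "left then right" lets a circular wait
// form, i.e. a deadlock. Always taking the lower-numbered fork first
// enforces a global lock order, so the cycle cannot form.
func dine(philosophers int) int {
	forks := make([]sync.Mutex, philosophers)
	var meals int64
	var wg sync.WaitGroup
	for i := 0; i < philosophers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			left, right := i, (i+1)%philosophers
			if left > right { // enforce the global lock order
				left, right = right, left
			}
			forks[left].Lock()
			forks[right].Lock()
			atomic.AddInt64(&meals, 1) // "eat"
			forks[right].Unlock()
			forks[left].Unlock()
		}(i)
	}
	wg.Wait()
	return int(meals)
}

func main() {
	fmt.Println("philosophers fed:", dine(5)) // 5, and no deadlock
}
```

The fully asynchronous style the parent advocates sidesteps the problem differently: no process ever blocks waiting for a reply, so there is nothing to form a cycle with in the first place.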
I characterize it as the co-problem (in the sense of the dual of a problem). Distributing a problem, starting processes and pushing messages is easy. But it may be hard to know when you're finished. Everything is easy, but termination is difficult. It's the Erlang/Elixir Halting Problem.
It is not often mentioned as a strong feature for concurrent programming, but Erlang/Elixir have timeouts built into the language. And when you add OTP supervisor restarts, you can avoid some common programming mistakes through random evasion - don't do this by design, but it does help resilience.
Races within an Elixir application are still possible; it is just data races that are prevented. For example, if you have a bank account in an Elixir process, nothing will stop you from implementing a withdrawal as two separate operations, read balance then write balance, which is inherently racy but not a data race.
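One fix is the single-owner pattern described upthread: route every balance operation through one process so the check and the write happen as a single message. A rough Go sketch of that idea (my own names, with a goroutine standing in for an Elixir process):

```go
package main

import (
	"fmt"
	"sync"
)

// account serializes all operations through one owner goroutine, the
// way an Elixir process would. Withdraw is a single message, so the
// read-check-write sequence cannot interleave with other withdrawals.
type account struct {
	ops chan func(balance int) int
}

func newAccount(initial int) *account {
	a := &account{ops: make(chan func(int) int)}
	go func() {
		balance := initial
		for op := range a.ops {
			balance = op(balance)
		}
	}()
	return a
}

// withdraw reports whether the balance was sufficient.
func (a *account) withdraw(amount int) bool {
	done := make(chan bool)
	a.ops <- func(balance int) int {
		if balance >= amount {
			done <- true
			return balance - amount
		}
		done <- false
		return balance
	}
	return <-done
}

func main() {
	a := newAccount(500)
	var wg sync.WaitGroup
	var mu sync.Mutex
	succeeded := 0
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if a.withdraw(10) {
				mu.Lock()
				succeeded++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println("successful withdrawals:", succeeded) // 50: never overdrawn
}
```

Splitting withdraw into a separate "read balance" message and a "write balance" message would reintroduce exactly the race the parent describes, with no data race in sight.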
What language and programming style did you use prior to using Elixir the most? Have you been able to build systems of similar or even larger size as previous projects in other languages, and what was that experience like (i.e. how did Elixir make things better)?
I've been doing this job for 20 years. I am fluent in C, Go, Rust, Python, as well as Elixir, and for my clients I have written concurrent systems running in production in all of these languages. I know how painful and complex the problem space is.
My opinion is not that rare, so instead of listing my credentials, you can search and find many others that have found this platform to be a great fit, simply because it has been designed to solve this very problem since 1986 when everybody else was focusing on single core, isolated systems.
Servers are 99% of the code I write, so yes I choose Elixir, though for very conservative clients and small projects I use Go.
For everything else, which is not a lot, there's Rust, Scheme, Lisp and many other fun languages to explore. My focus these days is on my business rather than consulting, so I have a lot of freedom.
How do you manage to convince employers of such language choices? (Do you need to convince?) And what kind of jobs or positions are that? I would love to use my skills like that, instead of building CRUD in Python, not really being able to apply my skillset, but employers are not ready to make the smallest leap it seems.
I've convinced the CEO that Elixir/BEAM/OTP is a good choice for a fairly large, multi-application project. It wasn't very hard: factors like high availability, the same programming language and runtime across the entire system, extremely fast prototyping, and being battle-tested in absurdly demanding settings sound very nice to a business strategist able to understand at least some of the implications.
The drawbacks are basically in recruiting and a few other areas and quite manageable.
Recruiting was exactly the one argument I got to hear: that hiring people would be difficult, that people would demand high wages, and so on. I could not convince an employer not to weigh that heavily. At least that is the argument brought forward. There might also be an element of "don't know it myself, don't want it in my company".
I worked at WhatsApp, one of the big Erlang users. Almost none of our server team knew Erlang before joining, including me. Until we hired someone who had actually used it before, I was the most knowledgeable pre-hire, because I remembered seeing a post about Erlang when it was open sourced.
Yes, we probably could have done some things better if we had a bit more Erlang experience on our team; at least while I was there, none of our applications were properly packaged as OTP applications, and maybe that could have been useful. But overall, we were smart, experienced server people who were willing to learn Erlang and we were handed a tool that fit our needs very well, so we all got Erlang books and figured it out. If you can recruit smart, experienced server people who are willing to learn a new language, you don't have a recruiting problem. I haven't personally worked with Elixir, but I feel like most of the unfamiliarity is going to come from the underlying BEAM and OTP, so same difference; Elixir just has different syntax and macros are more heavily used, IMHO both syntaxes are going to be unfamiliar to most.
That is exactly what I think. If you've got decent, educated developers, they should be up to speed fairly quickly. But management layers often have no trust at all, even if it would take maybe merely two weeks to be able to do basic implementations in a new language and ecosystem. Basically it means that we cannot possibly spend two weeks becoming better engineers, but we can spend infinite time wrangling with lesser tools.
I would love the chance to learn more Erlang (I looked at the beginning of "Learn You Some Erlang for Great Good!") or Elixir (used it in last year's AoC) on a job and get to use OTP, watch it run my function calls on multiple machines and all that. I know a lot about functional programming, as I do it in my free time (big Scheme fan).
As it is currently, I cannot apply my skills at my job. For example, when I think that some code should not mutate state but rather use pure functions, and that the tests should simply be function calls checking the output, I don't get the time to do that, nor the time to show what this would look like and how it would make things simpler. No one besides me at work seems interested in purely functional/persistent data structures either, which sooner or later become necessary if one wants to make things purely functional. So basically I am the only person with that knowledge and cannot apply it. It is so dull.
That'll be my approach. Together with knowing some people who know some people who would probably enjoy finally getting to use the BEAM in prod, we're betting that clever candidates who find our business interesting will also have an easy time learning this specific tooling.
It takes a bit of getting used to pattern matching and the overall functional/declarative approach when coming from a more imperative background, but I believe learning the ins and outs of the business niche will generally be harder and take more time.
In some companies, handling recruitment issues isn't enough; investors or shareholders might be worried about the 'exit' and how to maximise it for themselves, and refuse tooling they perceive as obscure and expect to lower bids from buyers.
I've been using both regularly for many years now (along with some others), and I'd say it depends.
Elixir matches the way my brain thinks somehow, with (kind of) pure functions transforming data step by step, and when you can break down a task in such a way that it's only a simple set of pipelines (|>), it's really great and feels great. It is also exceptionally cool to spawn Agents to hold some global state that multiple processes talk to, and the integrated stateful introspection/debugging is just chef's kiss. In the context of pg's "Blub Paradox", Elixir is an acceptable Lisp (not homoiconic, but with modern tooling). You can solve very complicated problems in very clean ways. Also often underrated: Phoenix/LiveView is probably the best escape hatch out of JS frontend hell altogether, and leads to better outcomes (performance, scalability, sanity, maintainability, ...) compared to JS frameworks.
On the other hand, sometimes I have a very dirty real-life thing I want to achieve: doing some Unix stuff, interacting with some ugly APIs, implementing a given imperative algorithm to brute-force a problem quickly within reasonable constraints, manipulating some image file, automating some ad hoc Outlook 365 process, ... you name it. The weakness of Go (extremely simple/plain, verbosity, boilerplate, ...) is actually its strength in these cases, though it took me a while to realize that. In Go, I don't even care anymore about making anything "elegant" (which is very tempting in Elixir!), but write the absolutely straightforward series of steps with brutal directness, including nested loops and very long functions. This leads to rapid dirty-work solving, and has the added benefit of trivial distribution (cross-compile to a single self-contained binary) to other folks who don't have dev dependencies installed. Also, I rely solely on the compiler/linter for this type of code and have zero tests for it (I am not going to mock the filesystem interactions and all that for what is basically a better ad hoc shell script).
So for the big/complex/scalable projects, I think Elixir/Phoenix is just about perfect as a "web stack", with only a few rough edges left, e.g. in the docs for beginners looking into LiveView. But for the small ad hoc stuff, or in situations where I can "brute force" my way through an ad hoc problem, Go is it.
Most people? I have no love lost for Go, but most (if not all) programmers have experience with imperative code, making it very easy to learn, even if the syntax is ugly. Elixir is from a different paradigm, so you have to learn that in addition to the syntax.
Not sure how the 'best products' constraint is supposed to be interpreted, but Elixir is a more flexible tool for developers than Golang. It has a decent REPL, can be used for scripting, and so on.
I haven't used Golang for things like binary protocols so I can't really compare, but Elixir or Erlang would be a good fit since they're very good for expressing grammars and fundamentally treat strings as byte sequences.
If pattern matching helps you express your problem domain succinctly they're also a good fit. Same goes for macros. My impression is that Golang commonly requires quite verbose or complex code compared to Elixir.
I expect raw number crunching performance to be better in Golang, but BEAM processes are very lightweight so it might win on either performance or developer ergonomics if the task can be solved in parallel.
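As a rough illustration of the byte-sequence point, here is how a length-prefixed frame might be parsed in Go (the helper is my own hypothetical sketch); the comment shows the one-line Erlang binary pattern match it corresponds to:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// parseFrame reads a length-prefixed frame: a big-endian uint16 length
// followed by that many payload bytes. In Erlang this is a single
// binary pattern match: <<Len:16, Payload:Len/binary, Rest/binary>>.
func parseFrame(buf []byte) (payload, rest []byte, err error) {
	if len(buf) < 2 {
		return nil, nil, errors.New("short read: no length prefix")
	}
	n := int(binary.BigEndian.Uint16(buf))
	if len(buf) < 2+n {
		return nil, nil, errors.New("short read: truncated payload")
	}
	return buf[2 : 2+n], buf[2+n:], nil
}

func main() {
	frame := []byte{0x00, 0x02, 'h', 'i', 0xff}
	payload, rest, err := parseFrame(frame)
	fmt.Println(string(payload), rest, err) // hi [255] <nil>
}
```

The Go version is perfectly workable; the contrast is that the bounds checks and offset arithmetic are explicit, where the Erlang pattern handles them declaratively and simply fails to match on a truncated frame.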
> Out of all of my abandoned side-projects, this was the one that made me think differently. Even if I would never actually use the end 'deliverable', working on the project still indirectly achieved what I'd set out to do. That led me to an important realisation: we talk a lot about abandoned side-projects as "failed", but their success is really a matter of perspective.
Very much agreed here. Abandoning things helps us eventually prioritise other things, using what we learned from the exercise of building the original thing. I briefly wrote about a similar experience on how thinking too much about maintaining a project over a longer period of time is not really a good idea.
Hey this looks interesting, will try it out. Thanks for writing it!
> That said, in a real-world scenario where I care about readability and maintainability, I'd either write this in Go with gzip-tar compression in the middle (single statically-compiled binaries for the win!) or I'd just use Busybox (~5MB base image) and copy what's missing into it since that base image ships with libc.
Agreed, rewriting was not an option (as mentioned in the beginning). Moreover, it would have taken longer to build a nice TUI interface than it took to dockerize it.
> I think it would be more accurate to say, in the Alpine ecosystem, it is generally not advised to pin versions of packages at all. Actually, this is not so much a recommendation as it is a statement of impossibility: You can't pin package versions (without your Docker builds starting to fail in a week or two), period. In other words: Don't use Alpine if you want reproducible (easily cacheable) Docker builds.
Agreed, I should have been clearer about my sentiment there. Thanks for stating this :)
> Personally, I'm very excited about snapshot images like https://hub.docker.com/r/debian/snapshot where all package versions and the package sources are pinned. All I, as the downstream consumer, will have to do in order to stay up-to-date (and patch upstream vulnerabilities) is bump the snapshot date string on a regular basis.
This is really helpful, thanks for sharing. Looks like it will be a good change, fingers crossed.
> How would I use this? Say I just made a bad commit in my terminal. How would I run this container to fix it? The container doesn't have my working directory, does it? Or is that the idea, to mount a volume with the working directory or something?
> So if you do that and just give me a one liner install command to copy paste then I guess this actually makes sense. A small docker container could eliminate a lot of potential gotchas with trying to install dependencies in arbitrary environments.
Yes, that was also an internal motivation behind doing this.
> Why does it need fzf? Is it intended to run the container interactively?
Hey, fzf is required by ugit (the script) itself. I didn't want to rely on CLI arguments to let users pick an undo command per matching git command. Adding a fuzzy-search utility makes it easier for people to search what they can undo about "git tag", for example.
Yes, the size of env comes close to 2 MB. I may be wrong here, though; it seems something is off.
I wasn't able to dig deep enough into why that was the case, considering the "env" utility was coming from busybox, which on copy averages close to 900 KB.
> copying the various standardized CLI tools and related library files into the image versus installing them with APK can introduce _many_ compatibility challenges down the road as new base Alpine versions are released which can be difficult to detect if they don't immediately generate total build errors
I'm maybe missing some context here. Are you saying that the default location of these binaries can change (the ones that get copied directly)? Or is it about the shared libraries getting updated, so the tools depending on those libraries will eventually break?
Given that you start out with a 31.4 MB image, I honestly don't think the introduced complexity in your build is worth it. It's a good lesson for people who don't know about build images and ship an entire build pipeline in their Docker image, but for a bash script and a <50 MB image the complexity is a bit weird.
Can't necessarily speak for the author, but here's one thing that can happen:
If the underlying system has a newer version of git than the one freeze-dried into your container, repositories managed there by native-git might be in a new format which container-git can't handle. (There might be some new, spiffier way of handling packs, for instance, or they might have finally managed to upgrade the hash function.) And similar issues potentially arise for everything else you're packaging.
No, what I'm saying is that you're blanket-copying fully different versions of common library files into operating system lib folders, as shown above. That can break OS lib symlinks and/or wholly overwrite the OS lib files themselves for the _current_ versions used in Alpine, if they exist now or in the future, potentially destroying OS lib dependencies, and it also overwrites any versions Alpine itself ships later, all to get your statically copied versions of the CLI tools your shell script needs. The same goes for copying bash, tr, git, and other binaries into OS bin folders. No No NO!
That is _insanely_ shortsighted. There's a safe way to do that, and then there is the way you did it. If you want to learn to do it right and are dead set against static binary versions of those tools for the sake of file size, look at how Exodus does it, so that you don't destroy OS bin folders and library dependency files in the process of making a binary movable from one OS to another.
This is why I'm saying your resulting docker image is incredibly fragile and something I would never depend on long-term as it's almost guaranteed to crash and burn as Alpine OS upgrades OS bins and lib dependency files in the future. That it works now in this version is an aberration at best and in reality, there probably are things that are broken in Alpine OS that you aren't even aware of because you may not be using the functionality you broke yet.
OS package managers handle dependencies for a reason.
Relax. I wouldn't recommend OP's approach either, but you're not particularly right either.
Exodus clearly states:
> Exodus is a tool that makes it easy to successfully relocate Linux ELF binaries from one system to another... Server-oriented distributions tend to have more limited and outdated packages than desktop distributions, so it's fairly common that one might have a piece of software installed on their laptop that they can't easily install on a remote machine.
Exodus is specifically designed for moving between different systems.
He is largely moving within the same base image. In the article, the base layer is `alpine:3.18` and the target image is `alpine:3.18`, and in the latter part of the article `scratch` (less to zero conflict surface). One would assume those two would stay coupled.
There are other technical merits to not doing what he's doing but you haven't listed any and dismissed his work. I'd venture if you actually knew what you're talking about you'd have better things to add to this conversation than "OS package managers handle dependencies for a reason."
Perhaps next time give some feedback that would help the writer get closer to a well-working Exodus-like solution. This is Hacker News; "don't roll your own" discouragement should be frowned upon.