I had already written large, nontrivial apps (linked in the article) which required more libraries than babashka ships with, including some I had written myself and others. I therefore needed to run native-image on my own codebase, since it wasn't runnable from within babashka (at the time, anyway; I don't know if it is now).
Running native-image on an already established, not-written-for-it codebase is a nightmare. I tried again just a few months ago on the linked codebases. native-image wouldn't budge. It kept getting hung up on I-don't-even-know-what; the errors were far too opaque, or the app just misbehaved in weird ways.
Ok, that explains why babashka wasn't suitable. I still wonder, though, about the requirement to have an executable.
I still remember many years of reading comp.lang.lisp, where the #1 complaint of newcomers was that "common lisp cannot produce a native executable". I remember being somewhat amused by this, because apparently nobody expected the same thing from other languages, like, say, Python. But apparently things have changed over the years and now CL implementations can produce bundled executables while Clojure can't — how the tables have turned :-)
I think various Lisp implementations have their own way to do it, e.g. `save-lisp-and-die` on SBCL.
But if what you mean by "executable" is "a small, compact executable like the ones built by C/C++/Pascal, without an extra Lisp runtime attached", perhaps you'd better look at something else, well, like C.
There is already some confusion here. Things are different across implementations, and the same words (executable, native, image, ...) mean slightly different things.
In the CL world it is usually relatively easy to create an executable. For example, in SBCL I would just run SBCL, save an image, and say that it should be an executable. That's basically it. The resulting executable
* is already natively compiled, since SBCL always compiles to native code -> all code is AOT-compiled to machine code
* includes all code and data, so there is nothing special to do and nothing to change in the code -> the code runs without changes
* includes all the SBCL tools (compiler, REPL, disassembler, code loader, ...), so it can be used for development -> the code can still be "dynamic" for further changes
* starts fast
Thus I don't need a special VM or a special tool to create an executable and/or AOT-compiled code. It's built into SBCL.
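For anyone who hasn't seen it, a minimal sketch of that workflow (MAIN and the file name "my-app" are placeholders of mine, not anything standard):

```lisp
;; Run inside a fresh SBCL, e.g. via: sbcl --load build.lisp
(defun main ()
  (format t "hello from a saved image~%"))

;; Saves the entire running image as a self-contained native executable
;; and exits; ./my-app then starts fast and calls MAIN.
(sb-ext:save-lisp-and-die "my-app"
                          :toplevel #'main
                          :executable t)
```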
The first drawback: the resulting executable is as large as the original SBCL image was, plus any additional code.
But for many use cases that's what we want: a fast starting Lisp, which includes everything precompiled.
Now it gets messy:
In the real world (TM) things might be more complicated:
* we want the executable to be smaller
* we want to get rid of debug information
* we need to include libraries written in other languages
* we want faster / more efficient execution at runtime
* we need to deliver the Lisp code&data as a shared library
* we need an executable with tuned garbage collector or without GC
* the delivery structure can be more complex (-> macOS application bundles for multiple architectures)
* we want to deliver for platforms which impose restrictions (-> iOS/Apple, for example, doesn't let us include a native-code compiler in the executable if we want to ship it via the App Store)
* we want the code&data to be delivered for an embedded application
In the CL world that's usually called delivery -> creating a delivered application that can be shipped to the customer (whoever that is).
This was (and is) typically the area where commercial CL implementations (nowadays Allegro CL and LispWorks) have extensive tooling. A delivered LispWorks application may start at around 7MB in size, depending on the platform. But there are also special capabilities in ECL (Embeddable Common Lisp). Additionally, there were (and still are) specialized CL implementations, embedded in applications or used as special-purpose compilers. For example, some of the PTC Creo CAD systems use their own CL implementation (based on an ancestor implementation of ECL), run several million lines of Lisp code, and expose it to the user as an extension language.
I actually just dusted off my old Clojure stuff to see if it was a "solved problem", and it isn't.
I grant that it might be described thus if I had started out with that stack, but trying to retrofit an older codebase with it is, I have found, next to impossible. You have to code around the myriad gotchas as you go, or you're never going to identify all those landmines going back over it after the fact. The errors and bad behaviors are too difficult to identify, even with the `native-image` tooling.
If you don't do what everybody is doing to solve a problem, then of course it is not a "solved problem" for you.
No, you don't need to code specifically for native-image. What are the landmines that you need to code around? Since you have not successfully compiled with native-image by following common practices, you obviously don't know.
Why do you imagine the OP would push a failed experiment to a publicly available spot? They probably left it sitting in a local branch and went on with their day.
Himalaya makes it pretty easy to write cli tools and automate email workflows. It pairs well with August, another rust project that can render html to text on the terminal. I wrote a git email patch automation tool around Himalaya so that people can receive email patches easily[1].
UBI in the 2010s, 4 10s in the late 2000s. These things never stick. 5 8s is a great compromise between employers and employees. Every time someone tries something different, either the employer or the employee decides they don't like it.
My employer hired me for 4 tens. I felt like a zombie during the week. I'd much rather have a few extra hours every day than a Friday to myself, especially since I need to go into the office on Fridays occasionally anyway. I moved back to 5 8s.
The problem is that so many people being paid for 5x8 are actually working 5x9 or 5x10.
At least for myself, I was raised with the notion that working harder would be recognised and eventually rewarded. That has not turned out to be the case, so I am seeking a rebalance.
The argument against is that the 5-day work week only started ~100 years ago. Growing up, I thought that, as humanity, we would automate as much as we could and shorten the work week as well. I didn’t think people would consider that “failure or laziness”.
The problem is, you can’t rely on the competitive market to make it happen, as your competition will eat you. The best thing you get is talent retention, but not shareholder value or more of whatever products you might be manufacturing.
Apple does a lot of good stuff, but remember that their whole business model is selling hardware. They have no financial interest in making it easy to continue to use old phones.
They have to maintain a balance that still incentivizes current purchases. Otherwise it’ll be a constant refrain of “don’t buy now, support might not last.”
In terms of length of official support, and aftermarket value, Apple is at the top of the game. Those strike me as the most important metrics here.
And while you might think that once official support is over, that's the end of the story, this is far from true. Those phones end up in developing markets, where there's an entire cottage industry dedicated to keeping them going. Jailbreaking is getting harder, so that might stop being an option eventually, but so far it's viable, and that's what happens.
This isn't as true as it used to be, now that Apple is getting increased revenue from subscriptions. If your old iPhone continues to work well, then Apple has a better chance of selling you Apple Music, Apple TV, etc. etc.
Old phones no, but old apps yes. If a developer has abandoned an app and hasn't been investing in the update treadmill, but end users still care about it, that can make people feel negatively about Apple.
> Old phones no, but old apps yes. If a developer has abandoned an app and hasn't been investing in the update treadmill, but end users still care about it, that can make people feel negatively about Apple.
On the other hand, it is well within the standard Apple approach to say "here's how we want people to use our hardware. We are well aware that this is not consistent with how some potential and past users want to use the hardware, but we are comfortable with losing those customers if they will not adapt to the new set-up."
I know it's not the Apple approach; I'm just pointing out an interpretation under which it isn't particularly focused on end-user needs in this area.
I feel like it's mostly an attitude about where to focus engineering resources, which is very "inside baseball", but people have post hoc justifications that it's really what end users want anyway.
> Let's make CRLF one less thing that your grandchildren need to know about or worry about.
The struggle is real, the problem is real. Parents, teach your kids to use .gitattributes files[1]. While you're at it, teach them to hate byte order marks[2].
Nope. Git should not mess with line endings; the remote repository not matching the code in your local clone can bite you when you least expect it. On Windows, one should disable the autocrlf misfeature (`git config --global core.autocrlf false`) and configure their text editor to default to LF.
3000% agree. I have been bitten endlessly by autocrlf. It is absolutely insane to me that anyone ever considered having your SOURCE CONTROL tool get/set different content than what's in the repo.
This is impractical in many situations, because tools that process build-source files (for example XML files that control the build, or generated source files) inherently generate CRLF on Windows. These are many, many, many tools, not just one’s text editor.
If you're using tools that only support one particular line ending, the solution isn't to convert your source automatically in the repo, it's to know your source is always stored in one format, and convert it back/forth when a tool needs it to be different.
How would you handle two different tools only supporting disjoint line endings?
The tools generally don’t care about the line endings on input, but they do generate their output using the platform’s line endings. If you don’t normalize the line-endings upon commit, then using the tools on Unix and Windows will alternatingly switch the line endings in the repo between LF and CRLF. And you don’t want to have those constant changes in your history and in the diffs.
Having Git normalize line endings for the relevant file types upon check-in is by far the simplest solution to this problem, in particular since having .gitattributes in the repository means that all clients automatically and consistently perform the same normalization.
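Concretely, something along these lines (the file patterns are just illustrative):

```
# .gitattributes: normalize these types to LF in the repository
*.c    text
*.xml  text
# Windows-only script types that must stay CRLF in the working tree
*.bat  text eol=crlf
# never touch binaries
*.png  binary
```

With this checked in, every clone applies the same rules for those paths regardless of anyone's local core.autocrlf setting.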
If everyone’s on Windows, or if the tool always generates/requires CRLF, then you should store the files with CRLF line endings in the repository. In a mixed Windows/Linux environment, I would still prefer to handle this myself rather than expecting Git to mangle line endings.
Sure, don't use autocrlf. But some _Windows_ tools need CRLF, like PowerShell or batch build scripts. Defaulting to LF in the editor will not save you.
Don't use `auto`, full marks, but the .gitattributes file is indispensable as a safety net when explicit entries are used in it.
I mean, the whole point of the file is that not everyone working on the project has their editor set to LF. Furthermore, not every tool is okay with line endings that are not CRLF.
When used properly (sure, ideally without auto), the .gitattributes file is a lifesaver.
When LF doesn’t work (e.g. in cmd), you can always use something like .editorconfig to tell editors to use CRLF, and you should keep those files as CRLF in the repo.
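For example, a minimal .editorconfig entry to that effect (patterns illustrative):

```
# Keep cmd/batch scripts as CRLF; editors with EditorConfig support comply
[*.{bat,cmd}]
end_of_line = crlf
```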
This largely concurs with clean architecture[1], especially considering the hindsight in his foreword.
Clean architecture can be summarized thusly:
1. Bubble up mutation and I/O code.
2. Push business logic down.
This is how it's stated in [1]:
> The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.
Inlining as a practice is in service of #1, while factoring logic into pure functions addresses #2, as noted in the foreword:
> The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing. When it gets to be too much to take, figure out how to factor blocks out into pure functions (and don't let them slide back into impurity!).
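A tiny sketch of the resulting shape, in Go for concreteness (the names and the tax example are mine, not from the book or the foreword):

```go
package main

import "fmt"

// Pure "policy" (inner circle): business logic with no I/O and no
// mutation of external state.
func totalWithTax(subtotal, taxRate float64) float64 {
	return subtotal * (1 + taxRate)
}

func main() {
	// Impure "mechanism" (outer circle): I/O stays out here at the edge.
	fmt.Printf("%.2f\n", totalWithTax(100, 0.08)) // 108.00
}
```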
I learned Racket at Brigham Young University in Utah, and Clojure, then Common Lisp on my own. I don't think it's just an MIT thing. It's in the ethos now. Lots of people know about SICP, for example.
I actually take this as evidence that Rust will always remain niche. It's just a very complicated language. Go or Zig is much easier to learn and reason about.
In Go you can immediately tell what the fields are in a config YAML file just by looking at the struct tags. Try doing that with Rust's Serde. Super opaque in my opinion.
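For illustration, a sketch of what I mean, assuming the common gopkg.in/yaml.v3 package (the Config fields are made up):

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// The YAML keys are right there in the struct tags.
type Config struct {
	Host string `yaml:"host"`
	Port int    `yaml:"port"`
}

func main() {
	var cfg Config
	data := []byte("host: example.com\nport: 8080\n")
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg) // {Host:example.com Port:8080}
}
```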
> Rust will only protect me from things my customers don't care about and don't understand.
That's the wrong kind of protection to measure by. Rust should protect you from the things that people other than your customers (who presumably are well behaved) care about.