Creating a Git commit using low-level commands was something I had always wanted to do, but I never found the time to really deepen my knowledge of Git. I actually googled for a blog post on this topic, but failed to find one. Finally I got the chance, and for the past couple of weekends I've been reading the Pro Git book (which seems to have the same content as git-scm.com/book). I believe it's good practice to write a blog post about a topic after finishing a book (teaching is a good way of committing knowledge to memory). To my surprise, creating a Git commit using plumbing commands was already covered in the final chapters of the book. I thought it would be a good idea to simplify that process and write a blog post that can be read in under 10 minutes, allowing those who haven't read the book yet (like my past self) to get a basic understanding of what Git is doing under the hood.
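As a taste of what the post covers, here's a minimal sketch of building a commit from plumbing commands alone (the file name and branch name are just examples; the <tree-sha> and <commit-sha> placeholders are the hashes the earlier commands print):

    # store a file's contents as a blob object under .git/objects
    echo 'hello' > greeting.txt
    git hash-object -w greeting.txt      # prints the blob's SHA-1

    # stage the file in the index, then snapshot the index as a tree object
    git update-index --add greeting.txt
    git write-tree                       # prints the tree's SHA-1

    # wrap the tree in a commit object (no parent, so this is a root commit)
    echo 'initial commit' | git commit-tree <tree-sha>

    # point the current branch at the new commit
    git update-ref refs/heads/main <commit-sha>

After that, git log shows a commit even though git add and git commit were never run.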
> But why is nobody finding or reading the later chapters in the docs?
I think that to read the later chapters of a book, one usually needs to read the earlier ones too. I personally don't jump directly to the internals when I want to read about something, because I'd assume I was missing a lot of context and background.
I don't think any language sparks joy the way Rust does -- not even Lisp, and certainly not C# or F#. Rust gives you the low memory overhead of a compiled language and attracts a broad base of developers.
The entire reason we have people trying to cram JavaScript everywhere is that, for some reason, it sparks joy for them.
Hell, even going back, K&R vs Allman brace style was about what sparked joy. Color themes? Spark joy. At this point, I am convinced that the way to get a software developer to do anything is to just play on that need for joy.
That would explain frontend, yes. But that would not explain NodeJS, JavaScript for embedded microcontrollers, and so many more.
Some might say it's Stockholm syndrome, but then there's the Python developers doing the same thing everywhere. And the LISP enthusiasts. So unless there's a lot of Stockholm syndrome (although it's not out of the question when it comes to LISP), a lot of it is easily explained by "me like language here, me go put language elsewhere to keep language"
You're overthinking it. We have Javascript on the server because Javascript was already a very commonly known language (due to being the only choice for the web), and the average programmer (one who doesn't post on these forums) is not going to bother learning another language if they don't have to.
The easiest way to turn front end developers into full stack developers was to let them use the same language for everything, and it was easier to bring JavaScript to the server than to bring any other language to the web.
These days, with WASM and JavaScript available as compilation targets, it might have been avoided, but it's already too late.
Anyway I rambled a bit but the point is "joy" has nothing to do with it.
I think that's just entrenchment at work. When the majority of your developers only know x languages, and one of them is always js, you tend to work by the lowest common denominator, and the tools that everyone can work with are the ones that stick.
Any decent developer can become competent with a new language in a matter of weeks, whereas making a large application with the wrong choice of language and tooling can become a maintenance nightmare. Given how difficult it is to verify JavaScript programs as "correct", its use on the server has always riled me up.
> But that would not explain NodeJS, JavaScript for embedded microcontrollers, and so many more.
I'm only going to address NodeJS because I think that niche projects that target niche workloads are just unrelated.
I believe it does explain NodeJS. The idea is simple: your frontend engineers can own your edge services/APIs. They're already heavily invested in JS and, at least at one point, there was a theory that unifying languages across frontend + edge made sense for other reasons.
I think the Javascript example is more about what people are already familiar with. Same goes for languages like Python. Other languages can't "spark joy" if you're not familiar or comfortable with them.
While this is true on an individual level, Rust has been ranked the most-loved programming language in many surveys, and it seems to spark joy for more people than other languages do.
What's missing from a survey like that is correlation with prior experience and education. Python was a previous winner of those kinds of surveys, and it's a terrible language in many ways.
Where you are failing here is likely empathy, or understanding how other people work or think. This might be due to a lack of experience in programming, or just reflexive negativity. Even if you think Python is a terrible language, for many people it's the best language for the job. This also includes most PhDs, because Python is the most used language in science and academia.
When doing critical thinking, please set aside your own biases and try to understand that your own viewpoint is not always the correct one.
This answer looks unjustifiably overheated for a statement that says X is "terrible in many ways". It can be the best tool for someone, and still be terrible in many ways, no contradiction here.
Oh I left PHP and Go out of my list, my subconscious blocked them out of existence. They'd be near the bottom anyway. I suppose Go with polymorphic types might be worth another look, but before that it was dead to me.
We developed the definitional model of a financial instrument modeling and simulation system in SML. The system included a DSL for specifying instrument models, so part of what the SML implementation did was act as a denotational semantics for that language.
The intent was never to deploy it using SML, it was to flesh out and formalize the design, and act as a specification for development of the full system.
The system was designed to run on a cluster long before the Kubernetes days, and so had to deal with many of the distribution, scaling, and concurrency issues itself, in addition to its business logic.
One of the experiences I had while working on that system was that at the time, I was reading "Monad Transformers and Modular Interpreters" - a seminal paper by Liang, Hudak, and Jones - and I thought I could probably quite easily implement the paper's approach using ML parameterized modules, instead of the Haskell typeclasses used in the paper. I tried it, and was soon disabused of that notion. Partly as a result of that, we wrote the distributed simulation engine for a later version of the system in Haskell.
I think you're a bit detached from reality if you think controlling for experience and education would massively change the picture for Python, which evidently enjoys above-average popularity with everyone from junior devs to elite programmers (such as Google's Director of Research, geohot, etc.).
I cannot help but notice that Perl used to be very popular in early web development, sys administration, and a number of scientific fields. And lots of people were enthusiastic about it (as witnessed by CPAN's content). Then it was declared unsuitable for teaching (too many ways to do everything), and the next generation, who started with Python, usually claim that Perl is bad, while among the older generation who grew up cooking Perl scripts you can still hear that Python is nothing more than a less flexible, less expressive bastard child of Perl. So yes, I think education (and the first usable programming language learned, in particular) influences popularity very much.
What makes you think so? If you've got a background in programming languages, maybe you'll roll your eyes at Guido not getting reduce or tail recursion, or CPython's unsophisticated runtime and under-specified/messy semantics, or whatever, but I'm kind of struggling to see why you'd be particularly down on python compared to other languages that (in the average case) have non-negative utility for software development.
I mentioned Python simply because it was ranked highly in some of these kinds of surveys, even though it has quite a few well-known problems as a language. It demonstrates that ranking in such surveys is not necessarily correlated with the capabilities or fitness for purpose of the language.
In some ways, Python is in fact worse than many of the mainstream alternatives - for example, it's not unusual for people to look to rewrite Python systems in some other language because of performance issues at scale. Same (if not worse) goes for Ruby.
Languages with built-in checks for correctness, like type checking and borrow checking, help you ship software faster, cheaper, and with fewer bugs in the long run.
YMMV, of course. I have a hard enough time managing with one style of brackets; Rust has, what, five (round brackets, square brackets, curlies, angles, and vertical bars)? Never mind the fact that some operators are prefix:
do_something(a_thing);
Others are infix:
x = a + b * c;
And yet others are postfix:
foo.frobnicate()?;
My sole braincell can't handle all that complexity, which is why I'll just stick with simple old Lisp.
Yes. Rust's compiler, rustc, is a bunch of Rust code that parses Rust syntax into an Abstract Syntax Tree, to which it can apply macros. From there the program is lowered through two levels of intermediate representations (HIR and MIR), where more Rust code does type inference, type checking, eventually borrow checking, and various optimisations, until it's finally lowered to LLVM Intermediate Representation (LLVM IR), which LLVM will further optimise and then turn into machine code for the target architecture.
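If you want to peek at those stages yourself, rustc can dump the intermediate forms; a quick sketch, assuming a source file named main.rs:

    rustc --emit=mir main.rs        # writes main.mir, the mid-level IR after type and borrow checking
    rustc --emit=llvm-ir main.rs    # writes main.ll, the LLVM IR handed off to LLVM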
I know you're getting downvoted, but seriously, Rust compilation/linking does take a long time, especially if you enable fat LTO. The upside is it's doing a lot more for you than say the C or C++ compiler would. It's a very small price to pay for what you get.
You can kind of compile (a version of) Rust into C and then compile that as well, but I assume miscompiles are common and there's very little checking done. It's used to bootstrap the compiler for Debian.
That really puts me off learning it. Since I'm experienced with C# and the .NET Core/Framework libraries and mostly want to dip my toes into functional programming, I thought it might be easier to start with F# before moving to another functional language.
The tooling (Visual Studio, LSPs, etc.) around F# is very lacklustre, I wish it had better support from MS. At the moment F# just seems to be a playground for experimenting with FP ideas that then get ported to C#.
As for learning a functional language, I recommend this Haskell tutorial[0], along with an accompanying video series of an experienced haskeller (Brian McKenna) running through it[1]. I've read countless texts and tutorials explaining Haskell and FP to me, but it didn't fully click until I saw someone with experience using the language and tooling effectively.
There are a few other video series from other experienced haskellers available that may be better, too, I just haven't watched them so I cannot comment.
I appreciate the response. As someone that struggles with higher-level math, am I going to get a benefit out of starting with Haskell? I have built financial systems in various languages, so my issue isn't necessarily how to code math functionality. It's more of the abstract concepts that get me.
The abstract concepts give you the tools to quickly classify a datatype. They are not limited to Haskell and let you ask the right questions of datatypes in any language.
The foundational concept is functoriality, i.e. mapping over the argument of a type. Whether a datatype has an instance of
1. (covariant) Functor
2. Contravariant functor
3. Functor+Contravariant (phantom argument), or
4. neither
says a lot about its structure and tells me what further questions I can ask (what hierarchies I can expect): A datatype can only be a Monad if it is covariant. It can only be Divisible and Decidable if it is contravariant. If it is both (3.), then its argument is not used (phantom) and can be mapped to any type. If it is neither (4.), it can be invariant (where the argument appears in both positive and negative positions, like the Endo datatype) or a more complicated datatype like a GADT, which would require a more complicated functor.
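If it helps to make that concrete, here's a minimal Haskell sketch of the four cases (the datatype names are my own illustrations, not standard ones):

    import Data.Functor.Contravariant (Contravariant (..))

    -- 1. Covariant: 'a' appears only in positive position
    newtype Box a = Box a
    instance Functor Box where
      fmap f (Box a) = Box (f a)

    -- 2. Contravariant: 'a' appears only in negative position
    newtype Pred a = Pred (a -> Bool)
    instance Contravariant Pred where
      contramap f (Pred p) = Pred (p . f)

    -- 3. Both: 'a' is phantom (never used), so it can be mapped to any type
    newtype Tag a = Tag Int
    instance Functor Tag where
      fmap _ (Tag n) = Tag n
    instance Contravariant Tag where
      contramap _ (Tag n) = Tag n

    -- 4. Neither: 'a' appears in both positive and negative position,
    --    like Endo from Data.Monoid
    newtype SelfMap a = SelfMap (a -> a)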
I believe you will, yes. You may not be able to jump right in to some of the deep discussions about type categories, I certainly can't, but I still have learned a lot from Haskell and both continue to use Haskell itself and use what it has taught me in other languages to my betterment.
Thank you for the explanation and suggestion. Sounds like it will be worth my effort to at least learn, just for new ways of developing, and how that affects any other language I touch.
As a quick example, a lot of what @Iceland_jack has mentioned I can make sense of, but I wouldn't have been able to contribute to that. I guess it is a bit like learning a foreign language and I am at the point of being able to understand what a native speaker is saying to me, but my responses are probably garbled nonsense. :) However I am still confident that what I have learnt has empowered me to be a better developer.
It's insane how the Islamic Republic wants to speed up its downfall and start another popular uprising with things like this. The walls of Iranian cities are still full of "Woman, Life, Freedom" and "Death to Khamenei" slogans.
This man is marvelous. Even though I know how top-notch his interactive blog posts are, he surprises me with their quality every time I open a new one. Bartosz is a huge inspiration for me.
I don't make this comparison lightly, but I'm reminded of Leonardo da Vinci. How much talent does one need to create something like this? It's not enough to be just 'good' at engineering, design, watchmaking, and writing... you have to be amazing at ALL of it. AND have the motivation to do it.
The fact that "Child Sexual Abuse imagery" is the only use case your brain could think of for Steganography says a lot about you. I wish I could report and block you a thousand times.
Report me for what? Being realistic about what a "hide a picture in a picture" technology is probably going to be used for? What an extreme of overreaction.
That's like inventing a "remote roadside detonation device" and claiming it will only be used for parades to activate confetti payloads. It's naive.