
Really nice to see a solidly valuable project develop a sustainable foundation instead of turning into yet another VC-backed devtools startup that will inevitably die in a few years.


Rather, thank IBM for paying Mitchell an outrageous amount of money for HashiCorp, so he can devote all of his time to awesome projects like Ghostty without ever thinking about sustainable income again.

So thanks, IBM! <3


While you're not wrong, I think this undersells a little how much Mitchell has given of his time to OSS. Yes, he's fortunate that he doesn't have to worry about money, but even when he did, he still contributed openly and freely.

That's part of what drew lots of us to HashiCorp in the first place - giving back.


It's a little tongue-in-cheek, but as you can see elsewhere in this discussion thread, he mentions this himself on his own X account:

"get asked the same about terminals all the time. “How will you turn this into a business? What’s the monetization strategy?” The monetization strategy is that my bank account has 3 commas mate."

https://x.com/mitchellh/status/1964785527741427940

Take a good guess where the three commas come from.


Tres Commas!


I didn't think it was possible for anyone to express this thought more obnoxiously than DHH but here we are.


The obnoxious one here is the person obsessed with monetization, not the person who throws their ignorance back in their face. Every hobby these days has to be monetized; it's fucking gross.


Eh; it's maybe dumb to suggest the only way for a project to be sustainable is to monetize it, but responding with "I'm rich, you peasant, I'm above such concerns" is infinitely worse.


Is DHH (David Heinemeier Hansson) worth 100M USD? Google results say he is worth about 50M USD... so "only" two commas.


Three comma club is for billionaires.


I think the bigger thing here is that even with three commas in his bank account he lacks the good sense to not associate with DHH.


> "The monetization strategy is that my bank account has 3 commas mate."

Having money doesn't mean that you'll have the motivation to continue working on something for free forever.


Free work is the most rewarding work on every metric but monetization, in my experience, and when you hit road bumps you can pay your way out to keep going. Sounds like the literal dream.


Without turning this into a brag session, this is my experience. I don't have to worry about money anymore, so I get to work on cool projects at my own pace, do things that probably sound pointless to most, and it doesn't matter if it's successful. The important thing is that I'm interested.

I'm not as talented as Mitchell tho.


There are a ton of different projects one can devote free work to. Eventually one will get bored and want to change things up.


Money begets the freedom to work on causes. Monetization was always a core part of HashiCorp, rather than being a bolt-on after years of OSS. Which is a good thing. (I was a customer of the first commercial offering from HashiCorp, their VMware add-on for Vagrant.)


But when you already have money, you can skip the “how can I work on this and not starve to death?” part.


IBM did not do that; HashiCorp was a public company before the acquisition.


I always feel weird thanking IBM. On one hand, they've funded numerous FOSS projects, made the ThinkPad and an amazing CPU architecture (PPC), and sometimes seem to be the only ones actually innovating in the tech space. On the other hand, they bought Red Hat and seem actively hostile to any FOSS projects that don't make them money.


I'd rather he'd still be working on Nomad to be honest, but Ghostty is a good consolation prize ;)


There are hundreds of thousands of software engineers who, given FU amounts of money, would absolutely keep writing software and do it only for the love of it. The companies that hire us usually make us sign promises that we won't work on side projects. Even if there are legal workarounds to that, it's not quite so simple.

Even still, whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.


> whatever high salaries they do give us just flow right back into the neighborhoods through insane property values and other cost-of-living expenses that negate any gains. So, it’s always just the few of us who can win that lottery and truly break out of the cycle.

You break out of the cycle by selling your HCOL home and moving to LCOL after a few years. That HCOL home will have appreciated fast enough given the original purchase price that the growth alone would easily pay for a comparable home in a LCOL area. This is the story of my village in Texas, where Cali people have been buying literal mansions after moving out of their shitboxes in LA and the Bay Area.


moonlighting is permitted by law in California (companies legally can't prevent you from doing it, iiuc), as long as there's no conflict of interest with your main job...


"no conflict of interest" is basically meaningless if your day job is writing software. These clauses you sign are quite broad in what that scope of conflict could be.

Every company I've worked for has had very explicit rules that say, you must get written permission from someone at some director or VP level sign off on your "side project," open source or not.

You might want to check your company guidelines around this just to make sure you're safe.


The set of side projects that aren't a conflict of interest when working at Google is rather limiting. Likely less so at small companies.


Not really, in my personal experience and per my friends, most of big companies are pretty lenient about it, except for Apple.


No, they're pretty strict. It just changes what you are allowed to do, with Apple being very restrictive in not letting you do it at all.


As long as you don't use their hardware to do it.


that goes without saying, but it's still not free permission when you use your own stuff.


Good. Maybe they'll add search to the terminal now. /s


https://twitter.com/mitchellh/status/1993728538344906978 - "Ghostty on macOS now has search [...] GTK to follow soon" - November 26th 2025


GTK is also merged; main branch has search. It's also exposed via libghostty for embedders.


But only in the tip (nightly) build. I'm somewhat tempted to switch to it for this.


A while ago I compiled Ghostty from HEAD, because it had a bug fix I cared for. It was a very stable and pleasant experience. No hassle whatsoever.


If you'd like you can also use `tip` as the update channel to get the nightly build binary without having to compile it yourself: https://ghostty.org/docs/config/reference#auto-update-channe...
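If you go that route, the change is a single line in Ghostty's config file. The key name below is my reading of the truncated link above, so treat it as an assumption and verify against the reference:

```
# Ghostty config (typically ~/.config/ghostty/config)
# Assumed key name; "stable" is the default channel, "tip" follows nightly builds.
auto-update-channel = tip
```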


Ah. Cool!


If you need to do that again, note that there is an asdf plugin as well ;-)

On Linux, compiling is actually the only way to get tip.


I'm hoping they'll get around to supporting command-number for switching between windows[0]. command-` is fine but clunky as hell when you have more than three or four windows. Without command-number, I'm still stuck using iTerm2 as my daily driver.

(It'd be nice if it supported other standard macOS UI conventions[1] too)

[0] https://github.com/ghostty-org/ghostty/discussions/8131

[1] https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...


A little unfair that this is downvoted. No search is like a dealbreaker for me. I'm happy with iTerm and for 99% of my use cases I don't need a "very fast" terminal. Thanks for pointing this out.

Seems I will wait a little longer until search is in the regular build (and not just the nightly ones).


Ghostty 1.3 will release in March.


"sustainable foundation" it's still one guy funding it, no? seems as sustainable as before


You can't build a house without the foundation (pun intended).

I said in the linked post that I remain the largest donor, but this helps lay bricks such that we can build a sustainable community that doesn't rely on me financially or technically. There simply wasn't a vehicle before that others could even join in financially. Now there is.

All of the above was mentioned in the post. If you want more details, please read it. I assume you didn't.

I'll begin some donor outreach and donor-relationship work eventually. The past few months have been enough work simply coordinating this process: meeting with accountants and lawyers to figure out the right path forward, meeting with other software foundations to determine proper processes, etc. I'm going to take a breather, then hop back in. :)


33 additional people funding it as of this announcement: https://hcb.hackclub.com/ghostty/transactions


How do you expect that to change? What is the next step in your mind? Maybe asking for donations? If only he would set up some way that the general public could contribute money to the project! That’d be the smart thing to do. Then he could write a blog post about it, and maybe someone would post a link to HN. That’d really be something.


To be fair, that one guy happens to be the OG Mitchell Hashimoto, who's worth a giant pile of money from selling Terraform to IBM, and he's the guy actually writing it in the first place, so I don't think that's, like, a terrible horrible no good issue.


This is a bizarre essay by someone who understands neither functional programming nor the history of computers.

> To be kind, we’ve spent several decades twisting hardware to make the FP spherical cow work “faster”, at the expense of exponential growth in memory usage, and, some would argue, at the expense of increased fragility of software.

There is not one iota of support for functional programming in any modern CPU.


Totally agree. In addition, one of his examples (Mars Pathfinder) has absolutely nothing to do with functional programming or simplifying assumptions of any kind. The Mars Pathfinder problem was caused by a priority inversion on a mutex - exactly the sort of thing that all programmers rightly consider hard, and that things like software transactional memory in FP would prevent. Here's the famous email "What Really Happened on Mars?", which was written by a Pathfinder software engineer and explains the issue:

https://dataverse.jpl.nasa.gov/dataset.xhtml?persistentId=hd...

Even by the standards of Substack, TFA is an extraordinarily poor blog post.


The definition of spherical cow is also butchered beyond recognition.

Spherical cows are about simplifying assumptions that lead to absurd conclusions, not simplified models or simplified notation in general.

Calling functional programming a spherical cow when you mean that automatic memory management is a simplifying assumption, is such a gross sign of incompetence that nobody should keep reading the rest of the blog.


> Spherical cows are about simplifying assumptions that lead to absurd conclusions

There aren’t any commonly-accepted conclusions from spherical cows, because the bit is the punch line. It’s a joke a physics 101 student makes when slogging through problems that assume away any real-world complexity and thus applicability.

Spherical cows, in the real world, are pedagogical tools first, approximations second, and mis-applied models by inexperienced practitioners third.

“Hello World” is a spherical cow. Simplifying assumptions about data are spherical cows. (And real dairy farmers implicitly assume flat cows when using square feet to determine how much room and grazing area they need per head.)


Spherical cows?

The joke as I recall it was about a physics student who brags that he can predict the winner of any horse race, so long as all of the horses are perfectly spherical, perfectly elastic horses.

I'm actually not sure where cows came in, but maybe there's a different version of the joke out there.


The spherical cow joke generally goes that a farmer has some problem with his cows (maybe it’s how much milk they’re producing, I don’t remember), and so his daughter says “you should ask my boyfriend to help - he’s a physicist and really clever”. So the farmer asks the boyfriend, and he says “Well, assume the cows are spherical…”

The joke being that when you do mechanics you generally start modelling any problem with a lot of simplifying assumptions. In particular, that certain things are particles: spherical and uniform.


The article posted seems a waste of time. This is a remedy: https://youtu.be/6oLvgxLFMKo?si=m38yRNygsLniS3Q9


Spherical cows are very much about simplified models--that's what modeling a cow as a sphere is all about.


Yes, but stupidly so.

It's not an idiom for beautiful simplicity.


Stupidity projected ... it's humor: https://en.wikipedia.org/wiki/Spherical_cow


Trying to be as kind as possible in my interpretation of the article, my take was that the author got stuck on the "spherical cow" analogy early on and couldn't let it go. I think there are nuggets of good ideas here that generally speak to leaky abstractions and impedance mismatches between hardware and software, but the author was stuck in spherical-cow mode and the words all warped toward that flawed analogy.

This is a great example of why rewrites are often important, in both English essays and blogs as well as in software development. Don't get wedded to an idea too early, and if evidence starts piling up that you're going down a bad path, be fearless and don't be afraid of a partial or even total rewrite from the ground up.


Yes. And the two nuggets I took were looking at a Unix pipe as a concurrent processing notation, and pointing out that Unix R&D on great notations (or the communication thereof?) stopped right before splitting, cloning, and merging concurrent streams. I've rarely seen scripts nicely setting up a DAG of named pipes, and I'm not aware of a standard Unix tool that would organize a larger such DAG and make it maintainable and easy to debug.

Assuming pointing at a problem counts as a nugget.
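To make the missing-notation point concrete, here is a minimal sketch (all filenames and filter choices invented for the example) of hand-wiring a tiny split-and-merge DAG with `mkfifo` and `tee`, exactly the plumbing that currently has to be set up by hand:

```shell
#!/bin/sh
# Split one input stream into two branches, filter each differently,
# then merge the branch outputs. The fifo and filenames are arbitrary.
set -e
dir=$(mktemp -d)
mkfifo "$dir/branch"

# Branch 1 reads the named pipe and uppercases the stream.
tr 'a-z' 'A-Z' < "$dir/branch" > "$dir/out1" &

# tee fans the source out: one copy into the fifo (branch 1),
# one copy down the pipeline, where branch 2 counts bytes.
printf 'hello\n' | tee "$dir/branch" | wc -c | tr -d ' ' > "$dir/out2"
wait

# "Merge" step: concatenate the branch results.
merged=$(cat "$dir/out1" "$dir/out2")
echo "$merged"
rm -r "$dir"
```

Even at this toy size, the wiring (backgrounding the reader, fifo open order, cleanup) is all manual; scaling this to a larger DAG is where a coordinating tool or notation is conspicuously absent.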


Agreed, it's really weird.

To the best of my understanding, the author describes the structured imperative programming style used since the 70s as "functional" because most languages used since the 70s offer functions. If so, it makes sense to describe hardware as optimized for what the author calls "functional programming", since hardware has long been optimized for C compilers. It also makes sense to describe callbacks, async, then, and thread-safety as extensions of this definition of "functional programming", because yes, they're extensions of structured imperative programming.

There are a few other models of programming, including what people actually call functional programming, or logical programming, or synchronous programming, or odder beasts such as term rewriting, digraphs, etc. And of course, each of them has its own tradeoffs.

But all in all, I don't feel that this article has anything to offer to readers.


This would probably apply better in the ~80s after all the hard work building Lisp/Forth machines


The most credit I could give is that the post itself is a spherical approximation of the subject, and the point being made is that they discovered async dataflow programming and think it's underrepresented. I've only seen it compared to command-line pipes as an explanation of the concept, not as an understanding of implementation characteristics.

I agree that code tends to be overrepresented--we don't 'data golf'. Even non-async dataflow oriented programs are much easier to follow, which happens to play exceptionally well with FP.


If you squint so hard that SSA is functional programming[1] and register renaming is SSA, modern CPUs are kind of functional, but that of course has nothing to do with functional programming done by the user, it’s just the best way we know to exploit the state of the art in semiconductors to build CPUs that execute (ostensibly) serial programs.

[1] https://www.cs.princeton.edu/~appel/papers/ssafun.pdf


You’d have to cross your eyes pretty hard but SIMD and GPUs.

But for the classic ALU, I can’t think of anything. Anything that helps FP was probably meant to help with text processing and loops in general.


TFA actually refers to "other spherical cows", not just FP.

It doesn't make any point very coherently, but it's not exclusively about FP, though that gets mentioned a lot.


> TFA actually refers to "other spherical cows"

What does that mean in the context of the comment you reply to - which includes the literal quote about "twisting hardware to make the FP spherical cow work faster”? The article may not be exclusively about FP but nobody said it was.


“spherical cow” seems to be a bizarre, pointless substitution for “encapsulation” or “object oriented programming” depending on the context.


It's a theoretician's trope. "Identical and spherical" is the baseline state of the objects in a system one wishes to model. There are several jokes with this as the punchline.

An executive is retiring. He's been very fond of horse races, but has been very responsible throughout the years. Now with some free time on his hands, he spends more time than ever at the tracks and collects large amounts of data. He takes his data, along with his conviction that he's certainly onto something, to a friend in research at a nearby university. He convinces his friend to take a look at his data and find a model they can use to win at betting. After many delays, and the researcher becoming more disheveled over months of work, he returns to the retired executive to explain his model. He begins "if we assume all the horses are identical and spherical..."


That author uses it to mean “model”, as he calls a variety of programming models “spherical cows”.

Well, for sure, a core tenet of computer science is that all models of computing are equally powerful in what inputs they can map to what outputs, if you set aside any other details


He uses it to mean all sorts of things. He describes "string interpolation" as a "spherical cow".


> TFA actually refers to "other spherical cows", not just FP

I’m genuinely curious if anyone can derive a consistent definition of what the author thinks a spherical cow is.


Yeah, the whole article is absurd. Functional programming is not even remotely mainstream, so I'm not sure who is twisting.


> There is not one iota of support for functional programming in any modern CPU.

But there is a ton of support for speeding up imperative, serial programs (aka C code) with speculative execution, out-of-order execution, etc.


Should say "(2018)".


No, this isn’t a good description of monads. It merely describes a case that shows up sometimes.


Dang, when I made this silly, little comment about FP, I didn't expect to get corrected by a legend in the field!

Thanks for taking the time to respond.


The wreck is actually from a 1986 crash that saw no fatalities.

https://www.facebook.com/199535516767620/posts/4006006656120...


> The wreck is actually from a 1986 crash that saw no fatalities.

I'm obviously as unsure as anybody of the correctness of this analysis, but I found a source that wasn't Facebook for your alternative: https://sacramento.cbslocal.com/2021/06/16/folsom-lake-plane...


I too am generally skeptical of Facebook posts, but this wasn't just some random Facebook user, it was an official post from the Placer County Sheriff's Office. In fact the CBS13 article is simply repeating what the Sheriff's Office had already reported in their Facebook and Twitter posts.


Services like the Internet Archive have a hard time, if they're able to at all, archiving Twitter and Facebook social posts, so having alternate sources is helpful in that regard.



Yes, and in fact this can even be done incrementally and efficiently these days.

One example is the open source tool Infer, which we run on very large bodies of native and Java code at Facebook. http://fbinfer.com


Does Infer actually provide anything for C which you don't get from the clang analyzer, for instance? That wasn't clear when I tried it a while ago.


Let me put Mike's comment into what I think is its proper context. "Poor support for numerical computing" really means "relative to Mike's dream, which is not actually realisable by any programming language today" :-)

Most readers seem to be misinterpreting Mike as anchoring off other popular programming languages of today, whereas he's looking for language features that (a) don't yet exist, and (b) have no consensus that they'll actually be good when they do. (I'm highly skeptical of dependently typed programming.)

I think that there's a case to be made that numeric programming in Haskell, relative to the state of the art of today rather than the year 2100, really isn't so great – but my concerns are very different than Mike's, and revolve around libraries rather than type system features.

Source: have done a bit of Haskell in my day.


You're 95% correct about my view.

I do think that matlab/python are a bit better numerical programming languages than Haskell as-is, but only marginally. This is not just due to the library ecosystem, but also because I think that dynamic languages really are better than the best Haskell2010/GHC8.2 library theoretically possible. There are just some things that the existing type system makes a bit more awkward.


Facebook has large engineering offices in New York, Seattle, London, and Tel Aviv, and smaller presences in Boston and Dublin.

There's also a vast range of engineering projects, from VR through video, mobile apps, machine learning, compilers and programming languages, operating systems, on to data centers.


> Facebook has large engineering offices in New York, Seattle, London, and Tel Aviv, and smaller presences in Boston and Dublin.

Updated my comment to reflect this better. I was just looking at the jobs site earlier and didn't see much in Europe outside London. Wasn't looking at the US.


There have been numerous well-known problems with the Java and Python standard libraries over the years. For instance, the date/time classes in Java were a disaster for a long time, and Python has taken decades to converge on a nearly-good-enough treatment of strings.


Java's stdlib is mindblowing in places.

My favorite is the treatment of iteration. You have immutable collections that support an iterator interface that allows modification and throws a runtime exception (so much for a type system...).

There is an interface that lets you iterate over the elements of something that supports it without allowing removal, but it's not recommended, and it doesn't enable the enhanced for loop that came with Java 1.5: https://docs.oracle.com/javase/7/docs/api/java/util/Enumerat....

It's as if someone said "how wrong can we get this?"


A small correction: GHC uses plenty of type tagging information. In fact, its metadata overhead is relatively high.


Relatively high compared to what? With GHC we have a single-word header on objects, which compares favorably to C# which usually has two-word headers, or Java, which similarly uses one word. Of course, GC-less languages like Rust or C++ have usually no tags at all, but I think it makes more sense to compare among GC-d languages.


Why is this there? Is it to facilitate things like Typeable? I believe that there's no language-level way to do things like runtime type reflection. And even if there were, how would one express a complex type like (Vector (forall a. MyTypeClass a => a -> Int, String))?

I'm also curious if dependently typed languages like Idris, which presumably must be able to have runtime access to type information, handle this stuff.


For values, laziness means there is a tag bit for whether a value is a thunk or evaluated. Sum types use tags to determine which variant is active.

For functions, because a function that takes two arguments and returns a value (a -> a -> a) has the same type as a function that takes one argument and returns a function that takes another argument that returns a value (a -> a -> a), the arity of functions is stored in the tag.

Some of these tags are eliminated by inlining but if you sit down and read some typical Haskell output you'll see a _whole lot_ of tag checks.

Source: spent a lot of time reading GHC output and writing high-performance Haskell code.


In Idris, as far as I know, runtime type information is kept around by default and erased through usage-based optimization (and possibly annotation?)

http://docs.idris-lang.org/en/latest/reference/erasure.html

